QUANTUM MANTRAS
Review of and supplement to P.C.W. Davies, Quantum Mechanics (1984)
Frank Munley, June 2011

CHAPTER 1: Preliminary concepts

1. Quantum mechanics is “weird” in the sense that it describes properties of matter that don’t seem to
match and indeed contradict what we observe or are habituated to at the human level. One hallmark of this
weirdness is “wave-particle duality,” i.e., depending on the experimental circumstances, it is sometimes
convenient to treat a physical system as having wave properties, and at other times as having particle
properties. How something can simultaneously behave like a wave and a particle is counterintuitive in the
extreme. For example, just as a beam of light shows an interference pattern when it passes through two
slits in an opaque screen, so an electron beam exhibits an interference pattern even though electrons behave
like particles in many situations. The de Broglie relationship between the particle property of
momentum and the wave property of wavelength is: p = h/λ = ħk, where ħ = h/2π, h = Planck's constant,
and k = 2π/λ.

2. Another manifestation of wave-particle duality is the relationship between energy and frequency:
E = hν = ħω. Although E = ħω holds for both a material particle, which has a rest mass, and a photon, whose
rest mass is zero, an important difference arises in the relationship between the wave number k and the
angular frequency ω for these two particles. For a photon, ω = kv, i.e., ω = kc where c = speed of light, while for
a material particle, ω = (ħ/2m)k². This important difference leads to the Schrödinger equation for material
particles stated below.
3. Uncertainty in frequency: Frequency is most directly measured by counting the number of wave peaks
passing a fixed point. If the counting lasts a time τ, and N peaks are counted, then ν = N/τ. But the
minimal uncertainty in the count is 1, so Δν = 1/τ. τ can also be taken as a measure of the "uncertainty
in time" Δt.
4. Items 2 and 3 lead to the Heisenberg uncertainty relation for energy and time: ΔEΔt ≈ h, so it is
impossible to carry out perfectly accurate measurements of both the energy and the time interval over which the
measurement is made.

5. Ψ is the wave function, which can be used to describe the energy, position, momentum, etc., of a
material particle. All measurements give real numbers, but Ψ is generally complex. So by itself, Ψ is not
observable; but |Ψ|² links us to physical reality:

In one dimension, |Ψ(x, t)|² dx = probability that a particle can be found (when a position
measurement is made) between x and x + dx. In three dimensions, |Ψ(r, t)|² dτ =
probability that a particle can be found in the volume element dτ about the position r.
(N.B.: dτ is a common notation for a 3-D volume element and has nothing to do with
counting time.) The probability that a particle can be found in a limited region is given
by the integral of |Ψ(r, t)|² dτ over that region. In particular, in one dimension, the
probability that a particle can be found between x1 and x2 is ∫ from x1 to x2 of |Ψ(x, t)|² dx. Since
probabilities should sum to 1, ∫ |Ψ(x, t)|² dx = 1 (or, in 3-D, ∫ |Ψ(r, t)|² dτ = 1), with
the integration carried out over all space. In this case, we say the wave function is
normalized.
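The normalization condition is easy to check numerically. The following is my own minimal sketch (not from Davies), assuming Python with NumPy and arbitrary units; the Gaussian packet is a hypothetical example.

```python
import numpy as np

# Hypothetical example: normalize psi(x) = exp(-x^2/2) on a grid, then
# confirm that the total probability  ∫|psi|^2 dx  equals 1.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                      # unnormalized wave function

norm = np.sqrt(np.sum(np.abs(psi)**2) * dx)  # sqrt of ∫|psi|^2 dx
psi = psi / norm                             # now normalized

total_prob = np.sum(np.abs(psi)**2) * dx
print(total_prob)                            # → 1.0 (to rounding)
```

The same recipe works for any square-integrable Ψ: divide by the square root of ∫|Ψ|² dx.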

6. Consider a particle localized on a one-dimensional line in a region of size Δx. Since the particle also has
a wave nature, the question arises: What range of wavelengths must be added together to form a wave
function, or in this context a localized wave packet, of size Δx? A rather straightforward application of
Fourier analysis gives us the answer. To suggest how this works, suppose the particle is known to be in a
one-dimensional "line box" of size Δx, and we want to determine what range of wavelengths is
required to cancel outside the box but add together inside it. This can be done precisely using Fourier
analysis, but we can "guesstimate" the range of wavelengths needed as follows.

The diagram shows the one-dimensional box containing the particle. The wave function has the
constant value ψ0 inside the box and zero outside. Also shown are three sinusoidal waves, each wave
representing a particular possible momentum of the particle if the momentum is measured. The figure
suggests that the waves add together inside the box but outside sometimes add and sometimes cancel.

[Figure: a square wave function of height ψ0 between −L/2 and +L/2, with three long-wavelength
sinusoids superimposed. The particle is equally likely to be found anywhere between −L/2 and L/2;
the probability of being between x and x + dx is constant: Prob(x, x + dx) = |ψ0|² dx.]

These waves help out, because they are all non-zero from −L/2 to +L/2 and tend to cancel outside. In other
words, waves with λ ≳ L/2, if combined in the correct proportions as determined by Fourier analysis, can
produce a wave function which is equal to ψ0 between −L/2 and +L/2. Contrast these waves with shorter-
wavelength waves:

[Figure: the same box, now with short-wavelength sinusoids (λ << L) that oscillate many times inside the box.]
As the diagram suggests, these waves tend to cancel out almost everywhere inside the box, and are largely
unnecessary to produce the desired ψ. But they can't be completely ignored, because, combined in proper
proportion as determined by Fourier analysis, they help the longer waves to cancel outside the box and to
produce the constant ψ0 in the box.

A precise Fourier analysis yields the amplitude as a function of wavelength needed to achieve
precisely the desired ψ. It is more convenient to do the analysis in terms of the amplitude as a function of
wave number k rather than wavelength. The following diagram shows the result, where A(k) is the
amplitude for L = 1 (and ψ0 = 1):

[Figure: A(k) versus k, a sinc-shaped curve whose central peak extends from −2π/L to +2π/L.]
In general, for a 1-dimensional box between −L/2 and +L/2, A(k) = 2ψ0 sin(kL/2) / (√(2π) k), and
ψ0 = 1/√L. As the graph shows, the greatest amplitude is for waves between ±2π/L, for a spread of
4π/L. But the amplitude is small as one goes to the limits of this range, so a reasonable estimate of the
spread or uncertainty in k is about half of this, or 2π/L. Since p = ħk, Δp = ħΔk ≈ 2πħ/L = h/L.
But the particle can be anywhere inside the box with equal probability, so L is a measure of the uncertainty
in the position of the particle, i.e., Δx = L. Therefore, ΔpΔx ≈ h.
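The A(k) formula can be checked directly. This is my own numerical sketch (not in the text), assuming NumPy; it Fourier-transforms the box wave function and confirms that the amplitude first vanishes at k = 2π/L.

```python
import numpy as np

# Hypothetical check: A(k) = (1/sqrt(2 pi)) ∫ psi0 e^{-ikx} dx over the box
# (-L/2, L/2), versus the closed form 2*psi0*sin(kL/2) / (sqrt(2 pi) k).
L = 1.0
psi0 = 1.0 / np.sqrt(L)
x = np.linspace(-L / 2, L / 2, 20001)
dx = x[1] - x[0]

def A_numeric(k):
    # box is even, so only the cosine part of e^{-ikx} survives
    return (np.sum(psi0 * np.cos(k * x)) * dx) / np.sqrt(2 * np.pi)

def A_formula(k):
    return 2 * psi0 * np.sin(k * L / 2) / (np.sqrt(2 * np.pi) * k)

print(A_numeric(1.0), A_formula(1.0))   # agree
print(A_numeric(2 * np.pi / L))         # ≈ 0: edge of the main lobe of A(k)
```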
In our box example, the particle is free and just bounces inside the box between its ends. In a
more complicated situation, the particle might be on a rubber band while bouncing back and forth, or
subject to some other type of potential energy function. In any case, the uncertainty relation is an order-of-
magnitude estimate, and the more general formulation of the uncertainty relation is framed in terms of ħ:

The Heisenberg uncertainty relation for position and momentum: ΔxΔp ≥ ħ.


The significance is that a more accurate measurement of p creates a greater uncertainty in x, and vice versa.
Most significantly, perfectly accurate measurements of position and momentum are impossible to carry out:
the product of their uncertainties must always be at least of order ħ.

7. Ψ must be single-valued, well-behaved (it can't blow up), and it and its first derivative (or gradient in 3-D)
must be continuous. One exception to the latter rule: where the potential is infinite, just set Ψ = 0.
8. Just as particles can flow from one place to another, so the probability of the particles being found at one
place or another can flow. The "probability current density" is:

j(r, t) = −(iħ/2m)(Ψ*∇Ψ − Ψ∇Ψ*).

(In one dimension, ∇ is replaced by ∂/∂x.)
9. The weighted average of something is just a sum of the values it can have times the probability of
having those values. E.g., in one dimension,

⟨xⁿ⟩ = ∫ from −∞ to +∞ of Ψ* xⁿ Ψ dx.
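The weighted-average recipe can be carried out numerically. A minimal sketch of mine (not from Davies), assuming NumPy and a normalized Gaussian state, for which the exact moments are known:

```python
import numpy as np

# Hypothetical example: moments of the normalized Gaussian
# psi(x) = pi^(-1/4) exp(-x^2/2); analytically <x> = 0 and <x^2> = 1/2.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)

mean_x = np.sum(psi * x * psi) * dx       # psi is real here, so psi* = psi
mean_x2 = np.sum(psi * x**2 * psi) * dx
print(mean_x, mean_x2)                    # ≈ 0.0 and 0.5
```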

10. Ψ can be determined from the Schrödinger Equation (V(r) is the potential affecting the particle of
mass m):

−(ħ²/2m)∇²Ψ + V(r)Ψ = iħ ∂Ψ/∂t. (A KEY EQUATION!)
CHAPTER 2: Wave mechanics I

1. The Schrödinger Equation is separable into time and space parts. The space part of Ψ(r,t) is u(r) or just
u:

−(ħ²/2m)∇²u + V(r)u = Eu. (ANOTHER KEY EQUATION!)

This equation is a "recipe" for grinding out the allowed energy values (or eigenvalues) for a particle bound in a
potential. The wave function associated with each energy eigenvalue is called an energy eigenfunction.
2. When a particle is free, the allowed energy values in the Schrödinger equation are continuous; if the
particle is bound (e.g., in an infinite or finite square well, by a harmonic force, etc.), the energy values are
discrete—only special values, E1, E2, …etc., are allowed.

3. The time part of the Schrödinger equation tells us that Ψ(x,t) = u(x)e^(−iEt/ħ) for a particle in a well-
defined energy state. Therefore, |Ψ|² = |u|², and to normalize u is to normalize Ψ.

4. Most problems in quantum mechanics are very difficult to solve, but the infinite square well is an
exception. In one dimension (1-D), this corresponds to a line with rigid "walls" at both ends, here placed
at x = −a and x = +a. The form of the wave functions is completely similar to waves on a string, and that
is the easiest way to generate them. However, you should also know how to obtain them by a systematic
solution of the 1-D Schrödinger equation. The solutions (think of waves on a string with the endpoints
fixed, so the wave is zero there) are:

un = (1/√a) cos(nπx/2a) for n odd, and un = (1/√a) sin(nπx/2a) for n even.

As the captions of the diagrams show, the de Broglie wavelengths are, in general, λn = 4a/n.

[Figures: V(x) and un(x) for the first four states on −a ≤ x ≤ +a: u1 with λ = 4a (i.e., 4a/1), u2 with
λ = 2a (i.e., 4a/2), u3 with λ = 4a/3, and u4 with λ = a (i.e., 4a/4).]

5. The energies of the particle in an infinite square well are purely kinetic. Since k = 2π/λ and p = ħk, we
have the following equivalent expressions for En: En = pn²/2m = ħ²kn²/2m = 4π²ħ²/(2mλn²) =
4π²ħ²/[2m(4a/n)²] = n²π²ħ²/(8ma²), n = 1, 2, 3, …
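A quick numerical sanity check of the well eigenfunctions and energies. This sketch is mine (not from the text), assuming NumPy and units ħ = m = a = 1:

```python
import numpy as np

# Hypothetical check for the well on (-a, +a): u_1, u_2 are orthonormal, and
# E_n = n^2 pi^2 hbar^2/(8 m a^2) equals hbar^2 k_n^2/(2m) with
# k_n = 2*pi/lambda_n and lambda_n = 4a/n.
hbar = m = a = 1.0
x = np.linspace(-a, a, 20001)
dx = x[1] - x[0]

def u(n):
    f = np.cos if n % 2 == 1 else np.sin
    return f(n * np.pi * x / (2 * a)) / np.sqrt(a)

def E(n):
    return n**2 * np.pi**2 * hbar**2 / (8 * m * a**2)

print(np.sum(u(1) * u(1)) * dx)          # ≈ 1: normalized
print(np.sum(u(1) * u(2)) * dx)          # ≈ 0: orthogonal
k3 = 2 * np.pi / (4 * a / 3)
print(E(3), hbar**2 * k3**2 / (2 * m))   # equal
```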

6. Theorem: If V(x) is symmetric, then the non-degenerate wave functions un have definite odd or even
parity. (Example: the infinite square well potential. The un’s are either sines or cosines with definite
parity.)

7. The energy eigenvalues of the finite square well (from –a to +a, i.e., V(x) is symmetric) are determined
by graphically solving the following equations:

tan(αa) = β/α (even parity), cot(αa) = −β/α (odd parity), with

α = (2mE/ħ²)^(1/2), and β = [2m(V0 − E)/ħ²]^(1/2).
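The "graphical" solution can also be done by root-finding. The following is my own sketch (not from Davies), assuming units ħ = m = 1 and hypothetical parameters a = 1, V0 = 10; it finds the even-parity levels only.

```python
import numpy as np

# Hypothetical numeric solution of tan(alpha a) = beta/alpha (even parity),
# with alpha = sqrt(2 m E)/hbar and beta = sqrt(2 m (V0 - E))/hbar.
hbar = m = a = 1.0
V0 = 10.0

def f_even(E):
    alpha = np.sqrt(2 * m * E) / hbar
    beta = np.sqrt(2 * m * (V0 - E)) / hbar
    return np.tan(alpha * a) - beta / alpha

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# scan for sign changes from - to + (a + to - jump is a pole of tan, not a root)
Es = np.linspace(1e-6, V0 - 1e-6, 20000)
vals = np.array([f_even(E) for E in Es])
roots = [bisect(f_even, Es[i], Es[i + 1])
         for i in range(len(Es) - 1) if vals[i] < 0 < vals[i + 1]]
print(roots)   # the even-parity bound-state energies for these parameters
```

The odd-parity levels follow the same way from cot(αa) = −β/α.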

CHAPTER 3: Wave mechanics II

1. The simple harmonic oscillator, for which V(x) = Kx²/2, has energy values En = (n + ½)ħω, and
eigenfunctions

un = [α / (π^(1/2) 2ⁿ n!)]^(1/2) Hn(αx) e^(−α²x²/2),

with α = (mω/ħ)^(1/2) and Hn(αx) the nth Hermite polynomial. Note that the lowest energy state has the
non-zero energy ħω/2. This is a reflection of the uncertainty principle: zero energy would imply that
momentum and position are both exactly equal to zero, which is prohibited by the uncertainty principle.
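The eigenfunction formula can be verified numerically. My own sketch (not from the text), assuming NumPy (whose `hermval` evaluates physicists' Hermite polynomials) and units ħ = m = ω = 1, so α = 1:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Hypothetical check: u_n is normalized, and H u_n = (n + 1/2) u_n for
# H = -1/2 d2/dx2 + x^2/2 (second derivative by finite differences).
alpha = 1.0
n = 2
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

N = sqrt(alpha / (sqrt(pi) * 2**n * factorial(n)))
H_n = hermval(alpha * x, [0] * n + [1])          # selects H_n
u = N * H_n * np.exp(-(alpha * x)**2 / 2)

print(np.sum(u * u) * dx)                        # ≈ 1: normalized

Hu = -0.5 * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 + 0.5 * x[1:-1]**2 * u[1:-1]
i = np.argmin(np.abs(x[1:-1]))                   # grid point nearest x = 0
print(Hu[i] / u[1:-1][i])                        # ≈ n + 1/2 = 2.5
```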

2. The hydrogen atom in its simplest treatment has the electron subjected to the Coulomb potential
V(r) = −e²/(4πε0r). By neglecting angular motions of the electron, we can rather simply determine the
energy levels of the atom:

En = −me⁴ / [2(4πε0)²ħ²n²].
Surprisingly, this is precisely what the simple Bohr theory of the H-atom predicts. The independence of the
energy eigenvalues on angular motions is analogous to the result from classical mechanics that the energy
of a planet or satellite under the influence of one other body’s gravitational force depends only on the major
axis of the elliptical orbit, not on its eccentricity. (It is the eccentricity which involves angular motions,
specifically angular momentum.)
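Plugging in SI constants recovers the familiar Bohr energies. A sketch of mine (not from Davies), assuming CODATA-style values for the constants:

```python
import numpy as np

# Hypothetical numeric check: E_n = -m e^4 / (2 (4 pi eps0)^2 hbar^2 n^2)
# should come out to roughly -13.6 eV / n^2.
m_e  = 9.1093837015e-31      # electron mass, kg
e    = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
hbar = 1.054571817e-34       # reduced Planck constant, J s

def E_n(n):
    return -m_e * e**4 / (2 * (4 * np.pi * eps0)**2 * hbar**2 * n**2)

for n in (1, 2, 3):
    print(n, E_n(n) / e)     # in eV: ≈ -13.61, -3.40, -1.51
```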

3. The energy levels in the previous item can be determined by assuming that u(ρ) = F(ρ)e^(−ρ/2) (where
ρ = βr and β² = −8mE/ħ²). Then F(ρ) obeys the differential equation:

d²F/dρ² + (2/ρ − 1) dF/dρ + [(λ − 1)/ρ] F = 0,

where λ = 2me² / (4πε0 βħ²). Assume F is a polynomial of degree k and substitute it into the d.e. The
coefficient of the leading (ρ^(k−1)) term is then proportional to λ − 1 − k, and since this term is not canceled
by anything else on the left side of the equation, it must be 0; this means that λ = k + 1, k = 0,
1, 2, …, i.e., λ = n = 1, 2, 3, …. Since λ (through β) has E in it, solving for E gives the energy
levels.

4. Free particles. Free particles are bothersome because, strictly speaking, their wave functions, of the
form Ψ(r, t) = e^(i(k·r − Et/ħ)), are not normalizable. It is convenient, then, to imagine "free" particles to be
in fact confined to a very large cubical box with sides of length L. In one dimension, we take this to be a
very long line of length L. In these cases, the normalized wave functions are:

Ψ(x, t) = (1/L^(1/2)) e^(i(kx − Et/ħ)) (1-D), and for 3-D, Ψ(r, t) = (1/L^(3/2)) e^(i(k·r − Et/ħ)),

where the 1-D case is appropriate for a particle moving in the +x direction. For the 1-D case, the
probability current (see Chapter 1, item 8 above) is j = dProb/dt = v/L, where v is the velocity of the particle
and Prob is the probability that the particle has left the box. Therefore, Prob = vt/L. Thus, when t = 0, Prob
= 0, since we assume the particle is initially in the box. After a time t = L/v, the particle must have left the
box. (If it starts at the left end of the box, it takes it this time to reach the right side and leave the box.)
Therefore, when t = L/v, Prob = 1. More often, we are interested in the flux of particles in a beam, e.g., the
number of particles per second passing a given point or impacting a target. This can be represented in one
dimension by a wave function Ψ(x, t) = A e^(i(kx − Et/ħ)), where the flux is proportional to |A|². It is
typical to take A = 1 for the incident beam wave function, since all we are interested in are ratios of fluxes.
(See next item.)

5. Scattering from a potential step: Consider a beam of energy E = ħ²k²/2m coming towards a potential
step of height V0 < E. Some of the beam will be transmitted with energy ħ²k′²/2m as expected from
classical physics, with k′ = [2m(E − V0)/ħ²]^(1/2). But some of the beam is also reflected, even though
classical physics cannot explain such a phenomenon. The reflection and transmission fractions, R and T,
are:

R = (k − k′)² / (k + k′)², T = 4kk′ / (k + k′)².
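Note that the two fractions sum to 1, as they must. A quick check of mine (not from the text), assuming units ħ = m = 1 and hypothetical values of E and V0:

```python
import numpy as np

# Hypothetical check: for E > V0, R + T = 1, yet R > 0 even though
# classically nothing would be reflected.
E, V0 = 5.0, 3.0
k  = np.sqrt(2 * E)
kp = np.sqrt(2 * (E - V0))

R = (k - kp)**2 / (k + kp)**2
T = 4 * k * kp / (k + kp)**2
print(R, T, R + T)    # R + T = 1
```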

6. Tunneling through a rectangular barrier. Consider a potential V(x) = Vo for 0 < x < a, and V(x) = 0
everywhere else. Classically, a beam with energy E < Vo would be totally reflected, but in quantum
mechanics, part of the beam tunnels through:
T = [1 + V0² sinh²(βa) / (4E(V0 − E))]^(−1), β = [2m(V0 − E)/ħ²]^(1/2).
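Evaluating the formula shows the characteristic exponential suppression with barrier width. A sketch of mine (not from Davies), with hypothetical parameters in units ħ = m = 1:

```python
import numpy as np

# Hypothetical evaluation of the tunneling fraction for E < V0; T is small
# but non-zero, and shrinks rapidly as the barrier widens.
E, V0 = 1.0, 2.0
beta = np.sqrt(2 * (V0 - E))

def T(a):
    return 1.0 / (1.0 + V0**2 * np.sinh(beta * a)**2 / (4 * E * (V0 - E)))

print(T(1.5), T(3.0))    # both in (0, 1); T(3.0) << T(1.5)
```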

7. Consider a set of N non-interacting distinguishable particles, described by wave functions
ψ1, ψ2, …, ψN. The joint probability that particle 1 is in volume element dτ1, particle 2 is in dτ2, …, etc., is
the product of the separate particle probabilities: Prob = |ψ1|²dτ1 |ψ2|²dτ2 … |ψN|²dτN. We will get this
result if we take the combined wave function to be the product of the individual ones: Ψ = ψ1ψ2…ψN. But if
the particles are indistinguishable, we must have a combined wave function which reflects this
indistinguishability. Two cases arise. For some particles, called bosons (e.g., photons), the wave function
must be symmetric under an exchange of particle coordinates. For other particles, called fermions (e.g.,
electrons, protons, and neutrons), the wave function must be antisymmetric under an exchange of particle
coordinates. See Davies for simple examples of each for two particles.

CHAPTER 4: The formal rules of quantum mechanics

1. Ordinary functions as vectors in infinite-dimensional space. In ordinary geometrical space, we have
three dimensions, which we can label 1, 2, and 3 (usually corresponding to x, y, and z), so a vector in 3-
space can be written as A = (A1, A2, A3). The scalar or dot product between this A and another vector B =
(B1, B2, B3) is, as everyone knows, just A·B = A1B1 + A2B2 + A3B3.

A function of x can be thought of as a vector in an infinite-dimensional "Hilbert space," with
the variable x playing the indexing role of "1, 2, 3" except that it can take on an infinite number of values.
To understand how this works, consider first a "five-dimensional" space defined at x = 1.5, x = 2.2, x = 2.4,
x = 3.5, and x = 3.8, and a function f(x) = x², defined over these values. Then everything we want to know
about this function is represented in the vector f = (2.25, 4.84, 5.76, 12.25, 14.44). Similarly, a function
g(x) = x^1.5 is g = (1.837, 3.263, 3.718, 6.548, 7.408). And the dot product of f and g is just (2.25×1.837 +
… + 14.44×7.408) = 228.5. Now imagine we define functions over not five values of x between 1 and 4 but
over a billion values, more or less equally spaced. Then f and g would be billion-component vectors, and
the scalar product between them, weighted by the grid spacing, would be very close to the integral of
(x²)(x^1.5) from 1 to 4. Going to the limit of continuous x over the entire real line leads to a vector
representation of the functions, which we can just represent as (fx) and (gx), or more conventionally as
f(x) = x² and g(x) = x^1.5, with the scalar product between x = a and x = b being:

∫ from a to b of f(x)g(x) dx = ∫ from a to b of x^3.5 dx.
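The limiting process is easy to watch numerically. A sketch of mine (not from the text), assuming NumPy; note the grid-spacing weight dx that turns the dot product into a Riemann sum:

```python
import numpy as np

# Hypothetical refinement of the five-point example: the dot product of the
# sampled f(x) = x^2 and g(x) = x^1.5, weighted by the spacing dx,
# approaches  ∫_1^4 x^3.5 dx = (4^4.5 - 1)/4.5 ≈ 113.56.
exact = (4**4.5 - 1) / 4.5

for n_points in (5, 100, 100000):
    x = np.linspace(1.0, 4.0, n_points)
    dx = x[1] - x[0]
    approx = np.dot(x**2, x**1.5) * dx
    print(n_points, approx)
```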

2. Two functions, f(x) and g(x), are considered mutually orthogonal (or just "orthogonal") on the interval
[a, b] if ∫ from a to b of f(x)g(x) dx = 0, with an analogous equation for three dimensions. Usually,
a = −∞ and b = +∞.

3. Solutions to the Schrödinger equation corresponding to different energies are mutually orthogonal;
when normalized, they are orthonormal:

∫ ψm*ψn dx = δmn, (A KEY EQUATION!)

with an analogous equation for three dimensions.
4. For a given physical system, the solutions (i.e., eigenfunctions) of the Schrödinger equation form a basis
(analogous to x̂, ŷ, ẑ), so that just as any three-dimensional vector r can be written as a sum of multiples of
x̂, ŷ, ẑ, so can any arbitrary function ψ(x) be written as a sum of the solutions to the Schrödinger equation.
Thus, ψ(x) = Σcnψn, and the sum is, in general, over an infinite number of eigenfunctions. Fourier analysis
is a good example of this for a free particle. The cn numbers are called expansion coefficients.
5. To find the expansion coefficients in ψ(x) = Σcnψn, we can use the orthogonal property of the ψn
eigenfunctions. Thus, if we want cm, multiply the equation by ψm*, integrate, and we get:
cm = ∫ ψm*ψ dx, (A KEY EQUATION!)
with an analogous equation for three dimensions.
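The recipe can be carried out numerically for a concrete state. My own sketch (not from Davies), assuming NumPy and the infinite-well eigenfunctions from Chapter 2 with a = 1; the parabola is a hypothetical choice of ψ:

```python
import numpy as np

# Hypothetical example: expand the normalized parabola psi ∝ (a^2 - x^2)
# (which vanishes at the walls) in the well eigenfunctions u_n, and check
# that the probabilities |c_n|^2 sum to 1.
a = 1.0
x = np.linspace(-a, a, 20001)
dx = x[1] - x[0]

def u(n):
    f = np.cos if n % 2 == 1 else np.sin
    return f(n * np.pi * x / (2 * a)) / np.sqrt(a)

psi = a**2 - x**2
psi = psi / np.sqrt(np.sum(psi**2) * dx)        # normalize

c = np.array([np.sum(u(n) * psi) * dx for n in range(1, 40)])
print(np.sum(c**2))      # → close to 1 (c_n falls off rapidly with n)
```

Since ψ is even, only the odd-n (cosine) coefficients are non-zero.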

6. The probability that a measurement on a system in state ψ will yield the energy Em is:

Pm = |cm|². (A KEY EQUATION!)
7. Collapse of the wave function: If a measurement shows that the system has energy En, then the system
is in the state ψn right after the measurement—i.e., the wave function collapses from ψ(x) = Σcnψn to ψn.

8. The time-independent Schrödinger Equation can be written in operator form Ĥun = Enun, where

Ĥ = −(ħ²/2m)∇² + V(r), and ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

9. An eigenvalue equation has the form Âψ = αψ, where Â is any operator and α is a number which in
general can be complex but in quantum mechanics must be real if it is to correspond to a measured
quantity. The function ψ is an eigenfunction of the operator Â. So the eigenvalue equation shows a
special relationship between Â and ψ: thinking of ψ as a vector, when Â operates on it, it doesn't change
its "direction"; it only changes its "length" by the factor α.

10. Any legitimate operator in quantum mechanics, i.e., any operator which corresponds to an observable
(like position or momentum), must be Hermitian, i.e., it must satisfy the following relationship for any two
arbitrary functions Ψ and Φ:

∫ Φ*(ÂΨ) dτ = ∫ Ψ(ÂΦ)* dτ.

In other words, if we always assume the operator operates to the right in the above expression, Â can
operate on Ψ, but it can also operate on Φ if we change Â to Â*.

11. Another way to state the Hermitian relationship in the previous item is to think of Â being able to
operate right or left, but if it goes to the left and operates on Φ*, we must complex-conjugate Â first.
This symmetry of going left or right, with everything on the left being understood to be complex-
conjugated, leads to Dirac's elegant "bracket" notation, whereby the integrals in the previous item can both
be represented as:

∫ Φ*(ÂΨ) dτ = ∫ Ψ(ÂΦ)* dτ = ⟨Φ|Â|Ψ⟩.

In this notation, ⟨Φ| is the "bra" part of the bracket and |Ψ⟩ is the "ket" part.
12. In the bra and ket notation, the expected value of an operator in a state ψ is:

⟨Â⟩ = ⟨ψ|Â|ψ⟩. (A KEY EQUATION!)

13. The commutator, [Â, B̂], of two operators is defined as ÂB̂ − B̂Â. Usually, we must operate on
some actual function to obtain the correct properties of the commutator.

14. If two operators commute, i.e., if their commutator is zero, then the operators can have simultaneous
eigenfunctions. If they don't commute, then they cannot have simultaneous eigenfunctions.

15. Two physical observables can be simultaneously measured without any error if and only if their
operators commute. In other words, a measurement of one will not affect the measurement of the other if
their operators commute.

16. The operators for position and momentum do not commute: [x̂, p̂] = iħ, i.e., [x, −iħ ∂/∂x] = iħ.
Because they don't commute, they can't be simultaneously measured without disturbing each other.
Therefore, they obey the Heisenberg uncertainty principle. (See Chapter 1 Quantum Mantras.)

17. In general, for any two operators Â and B̂, which have eigenvalues α and β respectively, we define
the uncertainty of the measured values as Δα = [⟨Â²⟩ − ⟨Â⟩²]^(1/2), and likewise for Δβ.

18. In general, for any two operators Â and B̂, which have eigenvalues α and β respectively,

ΔαΔβ ≥ (1/2)|⟨[Â, B̂]⟩|.

This again implies that they are simultaneously measurable, i.e., that ΔαΔβ can equal 0, only
if the operators commute. If the operators don't commute, they obey the uncertainty principle.

19. The commutator of an operator with the Hamiltonian operator tells us what the average time rate of
change of that operator and its observable is:

(d/dt)⟨ψ|Â|ψ⟩ = (i/ħ)⟨ψ|[Ĥ, Â]|ψ⟩. (A KEY EQUATION!)

In particular, if the commutator of an operator with Ĥ is zero, the observable corresponding to the
operator is conserved, i.e., it is constant in time.

CHAPTER 5: Angular Momentum

1. Classically, the angular momentum is L = r × p, and Lx = ypz − zpy, etc. We obtain the quantum
mechanical operator by substituting the operators for y, pz, etc., so L̂x = y(−iħ ∂/∂z) + z(iħ ∂/∂y), etc.
2. The operators for Lx, Ly, and Lz do not commute: [L̂x, L̂y] = iħL̂z, etc. Therefore, these components
cannot be measured with perfect accuracy simultaneously. In a sense, a measurement of one interferes
with or disturbs a measurement of the other.

3. L̂² does commute with all three components of L. Since L̂z is very simple in spherical coordinates
(L̂z = −iħ ∂/∂φ), we usually choose to work with simultaneous eigenfunctions of L̂² and L̂z.

4. The eigenfunctions of L̂² and L̂z are designated by the notation |l, m⟩, where l and m are the
quantum numbers of L̂² and L̂z respectively. For well-behaved eigenfunctions (i.e., single-valued and
finite), we must have l a non-negative integer: l = 0, 1, 2, 3, …. For each l, m = −l, −l+1, −l+2, …, −2, −1,
0, 1, 2, …, l−2, l−1, l, for a total of 2l+1 values of m. The eigenvalues of L̂² and L̂z are, respectively,
l(l + 1)ħ² and mħ.
5. In spherical coordinates, ⟨θ, φ|l, m⟩ = Nlm Pl^m(cos θ) e^(imφ) = Ylm, where Nlm is the normalization
constant: Nlm = (−1)^m [(2l + 1)(l − m)! / (4π(l + m)!)]^(1/2), and Pl^m(cos θ) is an "associated Legendre
polynomial." The entire function is the well-known spherical harmonic, Ylm.

6. The Ylm functions are orthonormal: ⟨l, m|l′, m′⟩ = δll′ δmm′.

7. The Ylm functions can be used to construct matrices for L̂² and L̂z. For each value of l, the 2l+1
values of m define a (2l+1)×(2l+1) matrix representation for L̂² and L̂z. Therefore, for l = 0, we have a
1×1 matrix (the sole element is 0); for l = 1, a 3×3 matrix; for l = 2, a 5×5 matrix; etc. (Where are the
even-dimensional matrices?? See below!) The matrices for L̂² and L̂z are:

L̂² = l(l + 1)ħ² I, where I is the (2l+1)×(2l+1) identity matrix, and L̂z = ħ diag(l, l−1, …, −l+1, −l).

Note that the matrices for L̂² and L̂z are both diagonal, reflecting the fact that they commute and hence
have common eigenfunctions when expressed in terms of the eigenvectors of L̂z. Also note that the
diagonal elements are their eigenvalues. A similar process allows matrices for L̂x and L̂y to be obtained.
For l = 1, the complete set of matrices is as follows.
L̂² = 2ħ² [[1, 0, 0], [0, 1, 0], [0, 0, 1]],  L̂z = ħ [[1, 0, 0], [0, 0, 0], [0, 0, −1]],
L̂x = (ħ/√2) [[0, 1, 0], [1, 0, 1], [0, 1, 0]],  L̂y = (ħ/√2) [[0, −i, 0], [i, 0, −i], [0, i, 0]].

Note that the matrices for L̂x and L̂y are not diagonal; the off-diagonal elements reflect the
fact that we have chosen to express everything in simultaneous eigenstates of L̂² and L̂z, but L̂x and
L̂y do not commute with L̂z. Note also that all four matrices are Hermitian, which for a matrix
means that taking the complex conjugate of the transpose leaves the matrix unchanged.
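These properties are easy to verify by direct matrix arithmetic. A sketch of mine (not from Davies), assuming NumPy, ħ = 1, and the conventional ħ/√2 prefactor for the l = 1 matrices of L̂x and L̂y:

```python
import numpy as np

# Hypothetical check of the l = 1 matrices: they are Hermitian,
# [Lx, Ly] = i hbar Lz, and Lx^2 + Ly^2 + Lz^2 = l(l+1) hbar^2 I = 2 I.
hbar = 1.0
Lx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = hbar / np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = hbar * np.diag([1, 0, -1]).astype(complex)

print(np.allclose(Lx, Lx.conj().T))                      # Hermitian
print(np.allclose(Lx @ Ly - Ly @ Lx, 1j * hbar * Lz))    # commutator
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
print(np.allclose(L2, 2 * hbar**2 * np.eye(3)))          # l(l+1) = 2
```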

8. For a given l, the most general state is a superposition of the |l, m⟩ and can be written as the
(2l+1)-component column vector

|ψ⟩ = C (a1, a2, …, a(2l+1))ᵀ, with corresponding bra ⟨ψ| = C* (a1*, a2*, …, a(2l+1)*),

where C is a normalization factor. (N.B.: the bra form is complex-conjugated!) The expectation value of
any matrix operator Â is just ⟨ψ|Â|ψ⟩.

9. The eigenvectors of L̂² and L̂z are just like the column vector shown in the previous item, but with all
entries zero except the one for the particular value of L̂z which is measured. For example, if l = 5, there
are 11 eigenvalues of m, running from +5 to −5. The eigenvector which will yield an L̂z value of +3ħ has
a3 = 1 while all other a-values are zero; the eigenvector for −5ħ has a11 = 1 while all others are 0.

10. As we saw above, the r × p type of angular momentum yields only integral values of l and square
matrices with an odd number of elements in a row or column. But not all angular momentum is of the
r × p type. Intrinsic angular momentum, more commonly called spin, fills in the gaps with half-integral
values of l. For example, if l = 1/2, we get a two-by-two matrix, meaning that only two values of L̂z are
possible. That's experimentally true for an electron. Any particle with a half-integral value of l is called a
"fermion," while one with an integral value is called a "boson." The crucial difference is that they obey
different types of statistics, but this will not be explored here.

11. For an electron, we designate its 2×2 matrices by the letter "S" to remind ourselves that we're
dealing with "spin." Because of the great importance of these matrices, they are called the "Pauli spin
matrices" after their formulator Wolfgang Pauli (1900-1958).

S² = (3/4)ħ² [[1, 0], [0, 1]],  Sz = (ħ/2) [[1, 0], [0, −1]],  Sx = (ħ/2) [[0, 1], [1, 0]],  Sy = (ħ/2) [[0, −i], [i, 0]].
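The spin matrices obey the same algebra as the l = 1 set, now with s(s+1) = 3/4. A quick check of mine (not from the text), assuming NumPy and ħ = 1:

```python
import numpy as np

# Hypothetical check: S^2 = (3/4) hbar^2 I, [Sx, Sy] = i hbar Sz, and the
# eigenvalues of Sz are ±hbar/2.
hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
print(np.allclose(S2, 0.75 * hbar**2 * np.eye(2)))       # True: s(s+1) = 3/4
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz))    # True
print(np.linalg.eigvalsh(Sz))                            # [-0.5, 0.5]
```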
12. MEASUREMENT RULE: If a spin is in a state (a, b)ᵀ and a measurement of Sz is made, the state
immediately changes to either (1, 0)ᵀ if the measurement yields +ħ/2 or (0, 1)ᵀ if the measurement yields
−ħ/2. This is true in general. For example, if a system is in a state in which the probability of obtaining a
momentum between px and px + dpx is some continuous function of px, then if a measurement of
momentum yields the value px, the system immediately jumps into the wavefunction or eigenstate
corresponding to this value, i.e., the wave function immediately changes to e^(ipx·x/ħ).

13. If the axis z′ makes an angle θ with the z axis, then the state corresponding to +ħ/2 along z, i.e., the
state |↑⟩, is: |↑⟩ = cos(θ/2)|↑′⟩ − sin(θ/2)|↓′⟩. Therefore, if a measurement puts the system in the
state |↑⟩, and then a measurement is made along z′, the probability of finding the spin up along z′ is
cos²(θ/2). The probability of finding the spin down along z′ is sin²(θ/2). Similar results apply if
we start with a spin along −z or ±z′: |↓⟩ = sin(θ/2)|↑′⟩ + cos(θ/2)|↓′⟩;
|↑′⟩ = cos(θ/2)|↑⟩ + sin(θ/2)|↓⟩; |↓′⟩ = −sin(θ/2)|↑⟩ + cos(θ/2)|↓⟩.
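These half-angle relations follow from diagonalizing the spin component along the tilted axis. A sketch of mine (not from Davies), assuming NumPy, ħ = 1, and a hypothetical tilt angle θ; z′ is taken to lie in the x–z plane:

```python
import numpy as np

# Hypothetical check: the +hbar/2 eigenvector of S·n for n = (sinθ, 0, cosθ)
# is (cos(θ/2), sin(θ/2)), so a spin prepared "up" along z is found "up"
# along z' with probability cos^2(θ/2).
theta = 0.7
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
Sn = np.sin(theta) * Sx + np.cos(theta) * Sz

up_prime = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
print(np.allclose(Sn @ up_prime, 0.5 * up_prime))   # eigenvalue +1/2

up_z = np.array([1, 0], dtype=complex)
prob = abs(np.vdot(up_prime, up_z))**2
print(np.isclose(prob, np.cos(theta / 2)**2))       # True
```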

CHAPTER 7: Approximation methods

1. If Eno is the unperturbed energy of the non-degenerate nth energy state uno for the Hamiltonian Ĥo,
and En is the energy of this state when the Hamiltonian is Ĥo + Ĥ′ (Ĥ′ is the perturbation), then for
Ĥ′ << Ĥo, the first-order shift in the energy of this state is:

Δ1En ≡ En − Eno = ⟨uno|Ĥ′|uno⟩ = H′nn.

2. To first order, the perturbed energy state un is equal to the old non-degenerate state uno plus a
superposition of smaller amounts of all the other old states (the old eigenfunctions are, like Fourier
sines and cosines, a complete set and can represent any well-behaved function):

un = uno + Σ over m ≠ n of [H′mn / (Eno − Emo)] umo.

(Note the importance of the nth level not being degenerate: if it were, then for some m ≠ n we would have
Eno = Emo and we'd have a blow-up!) Normally, we are primarily interested in Δ1En and not so interested
in the perturbed state un, unless something else needs to be calculated.

3. If the nth energy level is degenerate, the preceding formulation for the energy levels will not work.
Instead, the matrix of the perturbation Ĥ′ formed from the degenerate states must be used. Then the
perturbed energies of the originally degenerate states are the eigenvalues of this matrix. To illustrate,
suppose the nth unperturbed level is two-fold degenerate, i.e., there are two linearly independent states,
uno1 and uno2, with the same energy Eno. First, we form the matrix ⟨unoi|Ĥ′|unoj⟩ for i, j = 1, 2:

[[H′11, H′12], [H′21, H′22]].

Next, we find the eigenvalues, λ, of this matrix by solving the secular equation:

det [[H′11 − λ, H′12], [H′21, H′22 − λ]] = 0.

The two values of λ are the new energy eigenvalues for the perturbed degenerate states. But it may
happen that the two values are the same, in which case we say that the degeneracy has not been removed or
"lifted" by the perturbation. Similar considerations apply to higher levels of degeneracy. For example, in
Problem 4 in this chapter, it is shown that an electric field (the Stark effect) splits the four-fold degenerate
n = 2 hydrogen level into only three distinct energies. (The two m = 0 levels for l = 0 and l = 1 remain
degenerate.) So in this case, the degeneracy is only partially lifted.
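The two-fold case can be worked out concretely. A sketch of mine (not from Davies) with hypothetical matrix elements; with equal diagonal entries d and real off-diagonal coupling w, the secular equation gives λ = d ± w:

```python
import numpy as np

# Hypothetical two-fold degenerate example: the level splits by 2w
# (unless w = 0, in which case the degeneracy is not lifted).
d, w = 0.1, 0.3
Hp = np.array([[d, w], [w, d]])
lam = np.linalg.eigvalsh(Hp)     # eigenvalues in ascending order
print(lam)                       # ≈ [d - w, d + w]
```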

4. The variational method is designed to approximately determine the energy of the ground state of a
system when the eigenfunction of the ground state is not known. In this technique, a trial
ground state wave function, ut(α, β, …; r), is guessed at. Here, α, β, … are adjustable parameters, and the
task is to find the best values for them. Since the true eigenstate gives the smallest value of the energy, the
technique is to calculate the expectation value of Ĥ and then minimize it with respect to α, β, …. The
success of this technique arises from the fact that a rather poor guess of ut will usually give a surprisingly
accurate estimate of the ground state energy.

For example, assume we have a one-dimensional system and ut has just two adjustable parameters:
ut(α,β;x). Then the variational recipe is as follows:

(1) Calculate E 0 (" , ! ) = u t Hˆ u t .

(2) Minimize this with respect to α and β:


! E 0 (" , # ) ! E 0 (" , # )
= 0, and = 0.
!" !"
(3) The two equations in step (2) can be used to solve for the minimizing values of α and β, call
them αmin and βmin.

(4) Now the minimizing values from step (3) are plugged into the expression for $E_0$ in step (1):
$$E_{0,\text{estimate}} = E_0(\alpha_{\min}, \beta_{\min}).$$
This estimate is always greater than or equal to the actual ground state energy. It cannot be less.
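The four-step recipe can be run numerically. A minimal sketch, assuming atomic units and a Gaussian trial function $u_t \propto e^{-\alpha r^2}$ for the hydrogen-atom ground state, for which the expectation value has the standard closed form $E_0(\alpha) = \tfrac{3}{2}\alpha - 2\sqrt{2\alpha/\pi}$; the minimization of step (2) is replaced here by a crude grid search:

```python
# Variational sketch for one parameter (hydrogen atom, atomic units).
# Assumption: Gaussian trial function u_t ~ exp(-alpha*r^2), whose energy
# expectation value is E0(alpha) = 1.5*alpha - 2*sqrt(2*alpha/pi).
import math

def E0(alpha):
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

# Steps (2)-(3): brute-force grid search instead of solving dE0/dalpha = 0.
alphas = [0.001 * i for i in range(1, 2000)]
a_min = min(alphas, key=E0)

# Step (4): plug the minimizing alpha back in.
print(a_min, E0(a_min))  # alpha_min near 8/(9*pi) ~ 0.283, E near -0.424 hartree

# The exact ground-state energy is -0.5 hartree, so even this crude Gaussian
# guess lands within about 15 percent: the "surprisingly accurate" feature at work.
```

Note that the estimate, about −0.424 hartree, sits above the exact −0.5 hartree, as the variational bound requires.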

CHAPTER 8: Transitions
The following review includes a discussion of electromagnetic (EM) transitions, analogous to the simple
Bohr model in which EM radiation is produced by an electron undergoing a transition from one state to
another, or in which EM radiation induces a transition from one state to another. In the latter case, there are
two possibilities. Either the radiation pumps an electron from a lower to a higher state, i.e., the radiation is
absorbed, or the radiation induces a transition from a higher to a lower state, i.e., we have stimulated
emission which results in additional EM radiation. In all of these cases, the EM radiation will be treated as
a classical continuous electromagnetic field (which you are familiar with from Maxwell's equations),
because a full quantum mechanical treatment of the radiation field is beyond the scope of this course. In
other words, the treatment here is "semi-classical": classical for the EM radiation and quantum mechanical
for the atomic transition itself. (A few brief remarks on the full quantum mechanical treatment can be
found at the end of Section 11.) The discussion starts off with a general treatment of transitions due to a
time-dependent perturbation. Electromagnetic radiation will not be specifically discussed until Section 8.
So keep in mind that everything before that is wondrously general, particularly Fermi's Golden Rule, which
has a multitude of applications throughout modern physics.

1. TIME-DEPENDENT PERTURBATION THEORY. Consider a system with Hamiltonian $\hat{H}_o$ and
eigenvectors $\psi_{no}(\mathbf{r},t) = u_{no}(\mathbf{r})\,e^{-iE_{no}t/\hbar}$. The system is in eigenstate k and at t = 0 it is suddenly
subject to a small perturbation $\hat{H}'(\mathbf{r},t)$. This changes the wave functions and energy levels, and the new
wave functions can be expanded in terms of the original wave functions:
$$\psi(\mathbf{r},t) = \sum_n a_n(t)\, u_{no}(\mathbf{r})\, e^{-iE_{no}t/\hbar}.$$

In what follows, the subscript “o” will be dropped, since it will be understood that everything now refers to
the original unperturbed wavefunctions and energies. To a first-order approximation, the time-dependent
expansion coefficient, a_m(t), is:
$$a_m(t) = \frac{1}{i\hbar}\int_0^t \hat{H}'_{mk}\, e^{i\omega_{mk}t'}\, dt', \quad \text{where } \omega_{mk} = \frac{E_m - E_k}{\hbar}.$$
Therefore, the probability of a transition from state k to state m is $|a_m(t)|^2$.
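The first-order integral can be checked numerically. A minimal sketch, assuming ħ = 1 and a constant perturbation $H'_{mk}$ suddenly switched on at t = 0 (all values illustrative), for which the integral also has a simple closed form:

```python
# Numerical check of a_m(t) = (1/(i*hbar)) * integral_0^t H'_mk e^{i w t'} dt'.
# Assumptions (illustrative, not from the text): hbar = 1, constant H'_mk = 0.1, w_mk = 2.0.
import cmath

hbar, Hmk, w, t = 1.0, 0.1, 2.0, 5.0

# Midpoint-rule integration of the integrand H'_mk * exp(i*w*t').
N = 100000
dt = t / N
integral = sum(Hmk * cmath.exp(1j * w * (j + 0.5) * dt) for j in range(N)) * dt
a_numeric = integral / (1j * hbar)

# Closed form for a constant perturbation: a_m(t) = (H'_mk/(i*hbar)) * (e^{i w t} - 1)/(i w).
a_exact = (Hmk / (1j * hbar)) * (cmath.exp(1j * w * t) - 1.0) / (1j * w)

print(abs(a_numeric - a_exact))  # tiny: the quadrature agrees with the closed form
print(abs(a_numeric) ** 2)       # the transition probability |a_m(t)|^2
```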
2. HARMONIC PERTURBATION. The most common type of time-dependent perturbation is a
harmonic one which is turned on at t = 0: $\hat{H}'(\mathbf{r},t) = H(\mathbf{r})\cos\omega t$. This leads to:
$$a_m = -\frac{H_{mk}}{2\hbar}\left[\frac{e^{i(\omega_{mk}-\omega)t}-1}{\omega_{mk}-\omega} + \frac{e^{i(\omega_{mk}+\omega)t}-1}{\omega_{mk}+\omega}\right].$$
We expect transitions to be significant when $\omega \approx \omega_{mk}$, in which case the first term in the brackets is much
bigger than the second. The so-called rotating wave approximation consists in neglecting the second term
for $\omega \approx \omega_{mk}$. In this case, the probability $P_m(\omega,t)$ of making a transition in time t to a state m starting
from a state k with $E_m > E_k$ is, to first order, for a small perturbation:

$$P_m(\omega,t) = |a_m|^2 = \frac{|H_{mk}|^2 t^2}{4\hbar^2}\;\frac{\sin^2\!\left[\frac{(\omega_{mk}-\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}-\omega)t}{2}\right]^2}, \quad \text{with } \omega_{mk} = (E_m - E_k)/\hbar. \qquad \text{(ABSORPTION)}$$

Since the system goes from a lower energy, Ek, to a higher one, Em, this represents absorption.
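The size of the neglected counter-rotating term can be checked numerically. A sketch with illustrative values (ħ = 1), comparing the full two-term $|a_m|^2$ with the rotating-wave sinc² formula near resonance:

```python
# Compare the full two-term first-order amplitude with the rotating-wave
# (one-term) absorption formula near resonance. Illustrative values, hbar = 1.
import cmath, math

hbar, Hmk, wmk, w, t = 1.0, 0.1, 10.0, 9.8, 3.0

def osc(dw):
    """(e^{i*dw*t} - 1) / dw, the building block of a_m."""
    return (cmath.exp(1j * dw * t) - 1.0) / dw

# Full first-order amplitude (both terms).
a_full = -(Hmk / (2 * hbar)) * (osc(wmk - w) + osc(wmk + w))

# Rotating-wave approximation: keep only the resonant (wmk - w) term.
x = (wmk - w) * t / 2.0
P_rwa = (Hmk**2 * t**2 / (4 * hbar**2)) * (math.sin(x) / x) ** 2

print(abs(a_full) ** 2, P_rwa)  # close: the dropped term is suppressed by ~1/(wmk + w)
```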
Please note that the perturbation frequency ω is by definition positive, but $\omega_{mk}$ can be positive or
negative. Specifically, $\omega_{mk}$ is negative if $E_k > E_m$, in which case $\omega_{mk} - \omega$ in the preceding
equation is replaced by $\omega_{mk} + \omega$. This represents stimulated emission (predicted by Einstein in 1917,
before quantum mechanics was developed in the mid-1920s!) in which the incident radiation induces
a transition from the state with energy $E_k$ down to the lower energy $E_m$. In this case,
$$P_m(\omega,t) = |a_m|^2 = \frac{|H_{mk}|^2 t^2}{4\hbar^2}\;\frac{\sin^2\!\left[\frac{(\omega_{mk}+\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}+\omega)t}{2}\right]^2}, \quad \text{with } \omega_{mk} = (E_m - E_k)/\hbar < 0. \qquad \text{(STIMULATED EMISSION)}$$

3. PERTURBATION FREQUENCY ω EQUAL TO $\omega_{mk}$. Suppose that the frequency of the
perturbation precisely matches the energy difference between two levels: $\omega = \omega_{mk}$. Then the
probability of making a transition to the state m grows with time. This probability can be
obtained from the equation for $P_m(\omega,t)$ by taking the limit as $\omega \to \omega_{mk}$, which yields:
$$P_m(\omega_{mk},t) = \frac{|H_{mk}|^2 t^2}{4\hbar^2}.$$
Please remember that this holds only for a small perturbation, i.e., since $P_m(\omega_{mk},t)$ increases
with time, it holds only for times sufficiently small so that $P_m(\omega_{mk},t) \ll 1$.
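A quick numerical sketch of this limit (illustrative values, ħ = 1): as the detuning shrinks, the sinc² factor tends to 1 and the probability approaches $|H_{mk}|^2 t^2/4\hbar^2$.

```python
# The resonant limit of the absorption formula: as omega -> omega_mk the
# sinc^2 factor goes to 1, leaving P = |H_mk|^2 t^2 / (4 hbar^2).
# Illustrative values (hbar = 1): H_mk = 0.05, t = 2.0.
import math

hbar, Hmk, t = 1.0, 0.05, 2.0
P_res = Hmk**2 * t**2 / (4 * hbar**2)

def P(w_detune):
    x = w_detune * t / 2.0
    sinc2 = 1.0 if x == 0 else (math.sin(x) / x) ** 2
    return P_res * sinc2

for dw in (1.0, 0.1, 0.01, 0.001):
    print(dw, P(dw) / P_res)   # the ratio approaches 1 as the detuning shrinks
```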
4. EFFECT OF A LONG HARMONIC PERTURBATION, ω ≠ $\omega_{mk}$. For ω not equal to $\omega_{mk}$, the
sine-function part of $P_m(\omega,t)$ shows that it starts out at zero and then oscillates as a function of time:

[FIGURE 8.4.1: $P_m(\omega,t)$ vs. t for fixed ω, oscillating between 0 and the maximum $|H_{mk}|^2/[\hbar^2(\omega_{mk}-\omega)^2]$, first returning to zero at $t = 2\pi/(\omega_{mk}-\omega)$.]

The lesson to learn from this figure is that you can force a system to jump to a state m from a state k, but if
you keep the perturbation on too long, the system will return to state k. Surprisingly, the return time,
$t = 2\pi/(\omega_{mk}-\omega)$, is very long for $\omega \approx \omega_{mk}$ and is infinite for $\omega = \omega_{mk}$. But the time to reach the
maximum, $t = \pi/(\omega_{mk}-\omega)$, is then also infinite! Of course, we assume the perturbation
doesn't last forever.
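The oscillation and return time can be sketched numerically (illustrative values, ħ = 1):

```python
# P_m(omega, t) as a function of time at fixed detuning, checking the maximum
# and the return to zero at t = 2*pi/(omega_mk - omega). Illustrative values, hbar = 1.
import math

hbar, Hmk, detune = 1.0, 0.1, 0.5   # detune = omega_mk - omega

def P(t):
    return (Hmk**2 / hbar**2) * math.sin(detune * t / 2.0) ** 2 / detune**2

t_peak = math.pi / detune
t_return = 2.0 * math.pi / detune
print(P(t_peak))    # the maximum, Hmk^2 / (hbar^2 * detune^2) = 0.04
print(P(t_return))  # back to zero (up to floating-point roundoff)
```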
! ! !
5. DEPENDENCE OF $P_m(\omega,t)$ ON ω FOR A FIXED TIME. At a fixed time after the perturbation is
turned on, $P_m(\omega,t)$ has the following shape:

[FIGURE 8.5.1: Probability of absorption as a function of frequency for fixed time: a central peak of height $|H_{mk}|^2 t^2/(4\hbar^2)$ centered at $\omega_{mk}$, with first zeros at $\omega_{mk} \pm 2\pi/t$, and a sample perturbing frequency $\omega_o$ lying between the first and second zeros to the left of $\omega_{mk}$. The area under the entire curve is $\pi|H_{mk}|^2 t/(2\hbar^2)$.]

If the perturbation has a frequency $\omega_o$, then the probability of a transition at time t is given by the
magnitude of the $P_m(\omega,t)$ curve at that point. As shown in this diagram, $\omega_o$ is between the first and second
zeros to the left of $\omega_{mk}$, so the probability is rather small compared to the maximum $|H_{mk}|^2 t^2/(4\hbar^2)$.
Note from the equation for $P_m(\omega,t)$ that at t = 0, the sine function is identically zero, so
$P_m(\omega,0) = 0$ for any arbitrary ω. This is consistent with the graph of $P_m(\omega,t)$ as a function of
t shown in Figure 8.4.1. This is quite reasonable, since we can't expect a perturbation to cause something to
happen immediately after it is turned on. Note also the first zeros to the left and right of the central
resonance peak, which are separated from that peak by $2\pi/t$. Therefore, as t increases, these zeros get
closer and closer to $\omega_{mk}$, and for a fixed $\omega_o$, these zeros sweep through this value of ω, again consistent
with the graph of $P_m(\omega,t)$ vs. t. Simultaneously, as t increases, all of the peaks must grow in height,
because as the $P_m(\omega,t)$ vs. t graph shows, by the time the very low peaks initially far from $\omega_{mk}$ sweep through
a given ω, they all have the same height! This is exactly consistent with Figure 8.4.1.
Figure 8.5.1 illustrates the energy-time uncertainty principle. Imagine that the energy
difference Em – Ek, or equivalently the frequency $\omega_{mk}$, is unknown. To measure it, we subject the system
to a perturbation of frequency ω. If ω is sufficiently close to $\omega_{mk}$, then the absorption will be sizable. If
it is far away, then the absorption will be small. But most of the probability for the transition is in the main
central peak, which covers the frequency interval $[\omega_{mk} - 2\pi/t,\ \omega_{mk} + 2\pi/t]$. In other words, a perturbation
whose frequency lies within this interval has a good chance of inducing a transition from state k to state m.
We see that if our experiment lasts a time t, then a spread of input frequencies in that interval, i.e., an
energy range of $\hbar(\omega_{mk} - 2\pi/t)$ to $\hbar(\omega_{mk} + 2\pi/t)$, will be strongly absorbed. Therefore, the spread of
energies strongly absorbed is $\Delta E = 4\pi\hbar/t$, which is also a measure of the uncertainty in the measured
value of the energy difference between the two levels. Thinking of the measuring time t as the
“uncertainty” in time, Δt, we can cross-multiply to get $\Delta E\,\Delta t = 4\pi\hbar \geq \hbar$, QED.
Why is the sky blue? An interesting consequence of the uncertainty principle involves scattering
of radiation from a molecule. If the radiation is close to a resonant frequency, i.e., if the energy of the
radiation is close to the energy difference between two molecular levels, then ΔE is small, and
$\Delta t = 4\pi\hbar/\Delta E$ is large. On the other hand, if ΔE is large, then Δt is small. But what is Δt? Assume
that there is absorption of the radiation energy by the molecule. Then there are two interpretations of Δt,
depending on whether one is looking at the wave or particle aspect of the interaction between radiation and
molecule. From the wave aspect, i.e., considering the radiation to be wave-like, Δt is the time the
molecule is subject to the perturbation. From the particle aspect, in which the radiation consists of photons
and we are considering absorption of a photon, the absorption is for all practical purposes instantaneous, but
now Δt represents the time the molecule holds on to the radiation. For small ΔE, the molecule holds
onto the radiation for a long time, long enough for a collision with another molecule to occur. The collision
will typically “thermalize” the energy, i.e., the energy absorbed by the molecule is transformed into
molecular kinetic energy, so the colliding molecules rebound with greater kinetic energy than they started
with. On the other hand, if ΔE is large, then the molecule holds onto the radiation for a shorter time, so
the radiation can be quickly re-emitted. If emission happens before a collision occurs, then we have
scattering of the radiation. The small-ΔE situation helps tell the story of absorption of solar uv radiation
by ozone in the stratosphere. Ozone, and also O2 and N2, have absorption peaks in the uv, so the uv energy is
resonantly absorbed, a collision occurs, and the energy is safely thermalized. Blue light, on the other hand,
is farther removed from the uv resonant peak, so the large-ΔE case holds, and it is scattered. Of course,
green and red light are also scattered, but they are farther removed from the resonant peak, so they aren’t
scattered as much as blue. In fact, the scattering is proportional to the inverse fourth power
of the wavelength, so blue is scattered about 16 times as much as red at the extreme ends of the visible
spectrum. That’s why we have a blue sky!
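The λ⁻⁴ law itself is simple arithmetic; a sketch with illustrative wavelength choices (not values from the text):

```python
# The inverse-fourth-power (Rayleigh) scattering law. Wavelengths below are
# illustrative choices, not values taken from the text.
def rayleigh_ratio(lam_blue_nm, lam_red_nm):
    """Scattered intensity of the shorter wavelength relative to the longer one."""
    return (lam_red_nm / lam_blue_nm) ** 4

print(rayleigh_ratio(400, 800))  # extreme ends of the visible spectrum: 16.0
print(rayleigh_ratio(450, 700))  # a more typical blue/red pair: about 5.9
```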
6. A FOURIER TRANSFORM INTERPRETATION OF $P_m(\omega,t)$. The graph of $P_m(\omega,t)$ for fixed time
is precisely a graph of the Fourier transform of a cosine “segment” as shown in Figure 8.6.1. This is not a
coincidence. Assuming that a harmonic cosine perturbation in the form of an electromagnetic wave is
turned on at t = 0, then at time t it looks like the cosine segment shown in the figure. (The question mark
just indicates we don’t know what the future holds past time t—the perturbation might continue or it might
abruptly end.) If we assume this segment is of the form $\cos\omega_o t$ for a particular value of $\omega_o$, then the Fourier
transform of the segment will be centered at $\omega_o$.

[FIGURE 8.6.1: $\hat{H}'(\mathbf{r},t)$ vs. time: a cosine segment running from t = 0 to t, followed by a question mark.]

Figure 8.6.2 shows the Fourier transform of this segment. A comparison of this figure with Figure
8.5.1 shows that the probability of a transition, now given by the strength of the Fourier frequency at $\omega_{mk}$,
is exactly what Figure 8.5.1 shows it to be at $\omega_o$. This is because the equation for $P_m(\omega,t)$ is symmetric
in $\omega_o \leftrightarrow \omega_{mk}$, so it depends only on the distance from the center, i.e., on $|\omega_o - \omega_{mk}|$. And as before, for
longer times, the longer the cosine segment will be and the more tightly its Fourier transform will be
located near the central frequency (in this case, $\omega_o$).
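This Fourier-transform picture can be checked directly: a sketch that computes the transform of a truncated cosine by a brute-force Riemann sum (all numbers illustrative) and locates the peak of the spectral power:

```python
# Discrete Fourier transform of a truncated cosine segment: the spectral weight
# concentrates near the drive frequency w0. All numbers are illustrative.
import cmath, math

w0, T, N = 10.0, 4.0, 4000          # segment cos(w0*t) on [0, T]
dt = T / N
samples = [math.cos(w0 * j * dt) for j in range(N)]

def power(w):
    """|integral_0^T cos(w0 t) e^{-i w t} dt|^2 via a Riemann sum."""
    s = sum(x * cmath.exp(-1j * w * j * dt) for j, x in enumerate(samples)) * dt
    return abs(s) ** 2

freqs = [0.5 * k for k in range(1, 41)]      # scan w = 0.5 ... 20.0
w_peak = max(freqs, key=power)
print(w_peak)  # 10.0: the transform is centered on the segment's own frequency
```

Lengthening T narrows the peak, exactly as the text says the sinc-shaped curves narrow with time.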
[FIGURE 8.6.2: Probability of absorption as a function of resonant absorption frequency, for fixed time and fixed perturbing frequency: a central peak of height $|H_{mk}|^2 t^2/(4\hbar^2)$ centered at $\omega_o$, with first zeros at $\omega_o \pm 2\pi/t$; the area under the entire curve is $\pi|H_{mk}|^2 t/(2\hbar^2)$.]

7. FERMI’S “GOLDEN RULE.” Consider the absorption of light by atoms. Say they are all hydrogen
atoms. We are prone to think that the energy levels of one hydrogen atom are the same as for another
hydrogen atom, but this is true only at absolute zero, i.e., for motionless atoms, with lots of space between
the atoms. If the atoms are warm, i.e., at any temperature above absolute zero, the levels and the
frequencies corresponding to differences between energy levels are Doppler-shifted because of the thermal
motion. Broadening of energy levels can be caused by a number of other factors, such as crowding the
atoms together (pressure broadening) or just the natural lifetime of the state which, by the uncertainty
principle, causes a smearing out of the energy level. Fermi’s "Golden Rule" leads to a simple result for the
absorption in cases where the broadening is sufficiently large.
To derive the Golden Rule, assume to begin with that we have only a few energy levels, say 4,
labeled m1, m2, m3, and m4, separated but not too much so, as shown in Figure 8.7.1 (a). We can think of
these as really distinct levels or just as an approximation to the result of broadening a single level. Also
shown is the initial state, Ek, and the energy of the incident radiation, $\hbar\omega_{mk}$. This harmonic perturbation can
induce transitions from Ek to one of the four higher states.


Figure 8.7.1(b) shows the probability curves, $P_{m_i}(\omega,t)$, for the four levels as a function of ω,
with their bases placed for comparison at the energy levels shown in 8.7.1(a). (Note that the curves have
different heights, corresponding to different matrix elements $H_{m_i k}$.) The peak of $P_{m_i}(\omega,t)$ is at
$\omega_{m_i} = (E_{m_i} - E_k)/\hbar$. The incoming radiation of frequency $\omega_{mk}$ has probabilities $|a_1|^2$, $|a_2|^2$, $|a_3|^2$, and $|a_4|^2$ of
inducing a transition to each of the four excited states. Just looking at the curves, it is easy to see that a
transition to state 3 has the greatest probability. This is to be expected, since 8.7.1(a)
shows $E_{m_3}$ to be closest to $E_k + \hbar\omega_{mk}$. The probability that a transition will occur to any of the four excited
states is just the sum of the separate probabilities, i.e., $|a_1|^2 + |a_2|^2 + |a_3|^2 + |a_4|^2$, with $a_1$, $a_2$, and $a_4$ very
small compared to $a_3$.


[FIGURE 8.7.1: (a) Energy-level diagram: the initial state Ek and four closely spaced levels Em1 through Em4, with the photon energy $\hbar\omega_{mk}$ reaching from Ek to just below Em3. (b) The four probability curves $P_{m_i}(\omega,t)$, with different heights, centered at $\omega_{m_1},\ldots,\omega_{m_4}$. (c) The same construction when all matrix elements are equal: a single curve centered on $\omega_{mk}$.]

Now suppose all of the levels have the same values of $H_{m_i k}$ between states k and mi (i = 1,2,3,4).
Then all four probability curves are identical. As mentioned in the previous item, the probability of
radiation of frequency $\omega_{mk}$ being absorbed depends only on the difference $\omega_{mk} - \omega_{m_i}$, so we can replace
the four probability curves by one centered on $\omega_{mk}$ (and for convenience, placed right on the ω axis). This
is shown in Figure 8.7.1(c).


In practice, physical effects like Doppler broadening produce not just four or ten or twenty levels,
but a virtual continuum of energy levels, which we initially assume for convenience to be very large in
number but not infinite. This is depicted in Figure 8.7.2(a), with the density, ρ, of the “smear” of states
represented by the darkness of the shading. If there are dN states in the frequency interval [ω, ω + dω],
then ρ(ω) is defined by the relationship $\rho(\omega) = \frac{dN}{d\omega}$, i.e., the number of states in the interval is just
$dN = \rho(\omega)\,d\omega$. If we reasonably assume that the probability curves of all the states making up the
continuum are equal, then as before we can determine the probability of a transition by looking at the
overlap of the continuum with one single absorption curve centered on $\omega_{mk}$. This is shown in Figure
8.7.2(b) where, in a small interval around $\hbar\omega_{mk}$, ρ(ω) is approximately constant. Both $P_m(\omega,t)$ and the
density of states, ρ(ω), are plotted on the vertical axis.

The approximation which makes up Fermi’s Golden Rule consists in noting, first, that the
separation between adjacent energy levels, shown much expanded in Figure 8.7.1 above, is very small
compared to the width of the absorption curve, as suggested in Figure 8.7.2. And second, the density of
states doesn’t change very much across the range Δω. So the number of states absorbing
significant amounts of energy is approximately $\rho(\omega_{mk})\,\Delta\omega \equiv \Delta N$. The probability of a transition to any of the
ΔN states is simply the sum of the separate probabilities:



$$\text{Prob} = \sum_{i=1}^{\Delta N} |a_i|^2 = \sum_{i=1}^{\Delta N} |a_i|^2\,(1).$$

[FIGURE 8.7.2: (a) A quasi-continuum of levels around Em, shaded with density ρ(ω); the initial level Ek sits a photon energy $\hbar\omega_{km}$ below. (b) The single absorption curve P(ω,t), of width Δω, overlapping the approximately constant density of states ρ(ω).]

The reason for writing the number (1) in each term of the sum is as follows. Let the separation between
adjacent states $\omega_i$ and $\omega_{i+1}$ be $\Delta\omega_i$, which for a virtual continuum of states will be very small. Then the
density of states at $\omega_i$ is $\rho(\omega_i) = 1/\Delta\omega_i$, since there is just one state in this frequency interval, i.e.,
$\rho(\omega_i)\,\Delta\omega_i = 1$. Now we can substitute $\rho(\omega_i)\,\Delta\omega_i$ for the number 1, go to the limit of a continuum
of states, and take the different $\rho(\omega_i)$ all to be approximately $\rho(\omega_{mk})$, to get:

$$\text{Prob}(t) = \sum_{i=1}^{\Delta N} |a_i|^2\,(1) = \sum_{i=1}^{\Delta N} |a_i|^2\,\rho(\omega_i)\,\Delta\omega_i \approx \rho(\omega_{mk})\int_{-\infty}^{\infty} |a(\omega)|^2\,d\omega = \rho(\omega_{mk})\int_{-\infty}^{\infty} P(\omega,t)\,d\omega.$$

But the last integral is just $\frac{\pi|H_{km}|^2 t}{2\hbar^2}$, where $H_{km}$ is the matrix element between state k and a typical state
with an energy in the neighborhood of $\hbar\omega_{km}$. Therefore, we get:
$$\text{Prob}(t) = \frac{\pi|H_{km}|^2 t}{2\hbar^2}\,\rho(\omega_{mk}).$$
More usually, we’re interested in the “transition rate," which is the transition probability per unit time:
$w = \text{Prob}(t)/t$. Since Prob(t) is proportional to t, w is constant:
$$w = \frac{\pi|H_{km}|^2}{2\hbar^2}\,\rho(\omega_{mk}). \qquad \text{(FERMI’S GOLDEN RULE)}$$
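The key step, that the integral of P(ω,t) over ω grows linearly in t, can be verified numerically. A sketch with illustrative values (ħ = 1):

```python
# Numerical check that the area under P(omega, t) equals pi*|H|^2*t/(2*hbar^2),
# so the golden-rule probability grows linearly in t. Illustrative values, hbar = 1.
import math

hbar, Hmk, wmk = 1.0, 0.2, 50.0

def P(w, t):
    x = (wmk - w) * t / 2.0
    sinc2 = 1.0 if x == 0 else (math.sin(x) / x) ** 2
    return (Hmk**2 * t**2 / (4 * hbar**2)) * sinc2

def area(t, half_width=40.0, n=80001):
    """Riemann-sum integral of P over [wmk - half_width, wmk + half_width]."""
    dw = 2.0 * half_width / (n - 1)
    return sum(P(wmk - half_width + i * dw, t) for i in range(n)) * dw

for t in (1.0, 2.0, 4.0):
    exact = math.pi * Hmk**2 * t / (2 * hbar**2)
    print(t, area(t), exact)   # the area tracks pi*|H|^2*t/(2*hbar^2), linear in t
```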
8. THE DIPOLE APPROXIMATION AND BROADBAND RADIATION. Consider an atom bathed
in monochromatic electromagnetic radiation (i.e., radiation of a single frequency, ω) for which the
wavelength is much greater than the linear size of the atom. (As mentioned at the beginning of this review,
no attempt will be made to treat the radiation field quantum mechanically. A "semi-classical" treatment
will be sufficient.) Then at any instant of time the electric field, which we assume to be varying
harmonically, is constant over the atom. Therefore, $\hat{H}' = -e\mathbf{r}\cdot\mathbf{E}_o\cos(\omega t)$, so $H(\mathbf{r}) = -e\mathbf{E}_o\cdot\mathbf{r}$, where $\mathbf{r}$ is
the coordinate of the electron (taking the origin conveniently to be at the nucleus of the atom, which is so
massive we can neglect its motion). For most electromagnetic radiation of interest, e.g., anything longer
than hard ultraviolet, the dipole approximation is excellent. It fails only when we get into the X-ray region.
An electromagnetic wave, by its very name, includes a magnetic component, B(t), too. The
perturbation $\hat{H}' = -e\mathbf{r}\cdot\mathbf{E}_o\cos(\omega t)$ ignores the magnetic part of the wave because in the dipole
approximation, it is mainly the electric field that affects the motion of the electron around the nucleus. This
can be appreciated by noting that, in this approximation, the field changes only slowly compared to the
time it takes the electron to react to the field, and by Faraday's law, the magnetic effect, which depends on
the (slow) rate of change of B, is small. In essence, then, at any instant of time, the atom is subject to a
quasi-static electric field which produces the change in potential energy given by $\hat{H}'$.
The equation for $P_m(\omega,t)$ involves the matrix element $H_{mk}$. In the present case, this is equal to
$-e\langle m|\mathbf{E}_o\cdot\mathbf{r}|k\rangle$. Then the equation for $P_m(\omega,t)$ is:
$$P_m(\omega,t) = \frac{e^2 t^2\,|\langle m|E_{ox}x + E_{oy}y + E_{oz}z|k\rangle|^2}{4\hbar^2}\;\frac{\sin^2\!\left[\frac{(\omega_{mk}-\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}-\omega)t}{2}\right]^2}.$$
For plane-polarized light, it is convenient to take the direction of polarization along the z axis ($\mathbf{E}_o = E_o\hat{z}$)
(and the direction of propagation perpendicular to z), so we have only one integral to do in the matrix
element for $P_m(\omega,t)$.

Suppose again that the atom is bathed in monochromatic radiation but with random (isotropic)
polarizations and directions of propagation. Then $\overline{E_{ox}^2} = \overline{E_{oy}^2} = \overline{E_{oz}^2} = E_o^2/3$. Therefore, the squared matrix
element is just $\frac{E_o^2}{3}\,|\langle m|\mathbf{r}|k\rangle|^2$, so:
$$P_m(\omega,t) = \frac{e^2 t^2 E_o^2\,|\langle m|\mathbf{r}|k\rangle|^2}{12\hbar^2}\;\frac{\sin^2\!\left[\frac{(\omega_{mk}-\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}-\omega)t}{2}\right]^2}.$$

Our result for $P_m(\omega,t)$ is not realized in practice because radiation, even from a laser, is never
strictly monochromatic. Instead, we must integrate $P_m(\omega,t)$ over ω to obtain a frequency-independent
transition probability $P_m(t)$. As a first step towards obtaining $P_m(t)$, $E_o^2$ can be expressed in terms of the
energy density of the wave, u(ω). For a single-frequency plane wave, u is the sum of the electric and
magnetic energy densities: $u = \frac{\epsilon_o E^2}{2} + \frac{B^2}{2\mu_o}$. But for a plane wave, $B = \frac{E}{c}$, where c = speed of light.
Also, the speed of light is given by $c = \frac{1}{\sqrt{\mu_o\epsilon_o}}$. Therefore, $u = \epsilon_o E^2 = \epsilon_o E_o^2\cos^2(\omega t)$. Averaged over a
cycle of time, we get the average energy density $\bar{u} = \frac{\epsilon_o E_o^2}{2}$. Solving for $E_o^2$ and substituting into the
previous equation for $P_m(\omega,t)$ gives:

$$P_m(\omega,t) = \frac{e^2\bar{u}\,t^2\,|\langle m|\mathbf{r}|k\rangle|^2}{6\epsilon_o\hbar^2}\;\frac{\sin^2\!\left[\frac{(\omega_{mk}-\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}-\omega)t}{2}\right]^2}.$$

Next, turning to the problem at hand of radiation covering a continuous range of frequencies, we
must suppose we have a differential of probability, $dP_m(\omega,t)$, proportional to the differential of energy in
the frequency interval [ω, ω + dω]. In other words, $dP_m(\omega,t)$ is the probability that radiation in that
frequency interval induces a transition. The differential of energy can be written as $d\bar{u}(\omega) = u_1(\omega)\,d\omega$,
which defines $u_1(\omega)$ as the radiation density (per unit volume per unit frequency interval). We can get the
transition probability from the previous equation by replacing $\bar{u}$ by $u_1(\omega)\,d\omega$ and integrating over ω. (In
probability terms, we want the probability of a transition due to frequencies in the range [ω, ω + dω], plus
the probability due to frequencies in the range [ω + dω, ω + 2dω], …etc.) We assume the radiation density
covers a wide range of frequencies compared to the region over which $P_m(\omega,t)$ is appreciable, as shown in
Figure 8.8.1. When the integration is carried out, most of the contribution comes from the narrow region
around $\omega_{mk}$, as suggested

[FIGURE 8.8.1: the broad radiation density $u_1(\omega)$, overlapping the narrow absorption region of width Δω centered at $\omega_{mk}$.]

by the figure. In this narrow region, $u_1(\omega)$ is practically constant, of magnitude $u_1(\omega_{mk})$, which can be
taken out of the integral. Taking the limits of the integral with respect to ω from 0 to +∞ is equivalent to
taking the limits of the relevant variable in the integrand, $\frac{(\omega_{mk}-\omega)t}{2}$, from +∞ to −∞, since $\omega_{mk}t$ is
essentially infinite compared to the narrow region over which $P_m(\omega,t)$ is appreciable. The result of integrating

$$\frac{\sin^2\!\left[\frac{(\omega_{mk}-\omega)t}{2}\right]}{\left[\frac{(\omega_{mk}-\omega)t}{2}\right]^2}$$
over ω is $\frac{2\pi}{t}$. Therefore, we end up with:
$$P_m(t) = \frac{\pi e^2 u_1(\omega_{mk})\,|\langle m|\mathbf{r}|k\rangle|^2\,t}{3\epsilon_o\hbar^2} = B_{mk}\,u_1(\omega_{mk})\,t,$$
which defines the constant $B_{mk}$. This constant will be met again in Section 11. As with Fermi's Golden
Rule, this leads to a transition rate, w, which is constant:
$$w = \frac{P_m(t)}{t} = B_{mk}\,u_1(\omega_{mk}).$$
Note that Davies chooses to express w in terms of the radiant intensity $I(\omega) \equiv u_1(\omega)c$, where c is the
speed of light.
The foregoing result is really another form of Fermi's Golden Rule, which we should now
understand to apply if narrow-band radiation causes a transition to a continuum of states or if broad-band
radiation, essentially a continuum of radiation states, causes a transition to a narrow state.
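To get a feel for the size of $B_{mk}$, it can be evaluated for the hydrogen 1s → 2p (Lyman-α) transition. The matrix element $|\langle 2p_0|z|1s\rangle| = (128\sqrt{2}/243)\,a_0$ is the standard result for these particular states; it is assumed here rather than derived in the text:

```python
# Sketch: B_mk = pi*e^2*|<m|r|k>|^2 / (3*eps0*hbar^2) for hydrogen 1s -> 2p.
# The matrix element |<2p0|z|1s>| = (128*sqrt(2)/243)*a0 is the standard
# textbook result for these states (assumed here, not derived in the text).
import math

e = 1.602176634e-19        # C
eps0 = 8.8541878128e-12    # F/m
hbar = 1.054571817e-34     # J s
a0 = 5.29177210903e-11     # m, Bohr radius

z_mk = (128.0 * math.sqrt(2.0) / 243.0) * a0  # only the z component is nonzero for 2p0
r2 = z_mk**2                                   # |<m|r|k>|^2

B = math.pi * e**2 * r2 / (3.0 * eps0 * hbar**2)
print(B)   # roughly 4e20 m^3 J^-1 s^-2; multiply by u1(omega_mk) to get the rate w
```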
9. SELECTION RULES FOR THE HYDROGEN ATOM. As the equations for $P_m(\omega,t)$ and
$P_m(t)$ show, they depend on the matrix element $|H_{mk}|^2 \equiv |\langle m|H(\mathbf{r})|k\rangle|^2$. For the hydrogen atom, m and k
each consist of four quantum numbers, n, l, m, and s, so we could write
$$|H_{mk}|^2 \equiv |\langle n_m\,l_m\,m_m\,s_m|H(\mathbf{r})|n_k\,l_k\,m_k\,s_k\rangle|^2,$$
but with this understood, the simpler $\langle m|H(\mathbf{r})|k\rangle$ notation can
be used. Forgetting spin, this matrix element, which depends only on space variables, is very important
because it is zero for the dominant dipole interaction $H(\mathbf{r}) = -e\mathbf{E}_o\cdot\mathbf{r}$
except for certain values of m and k.
For example, it can be shown that transitions for any system with a spherically symmetric potential (e.g.,
the H atom or hydrogen-like atoms) must obey the following selection rules:
$\Delta l = \pm 1$
$\Delta m = 0$ for z polarization
$\Delta m = \pm 1$ for polarization in the xy plane
As an application, consider the H-atom states $|3,1,0\rangle$ and $|2,1,0\rangle$. Can a transition occur between these
states? No, because the selection rule on Δl is violated (i.e., it didn’t change at all), even though the
selection rule on Δm is satisfied. What about a transition from $|3,0,0\rangle$ to $|2,1,-1\rangle$? This can occur for
radiation polarized in the xy plane because the conditions on both Δl and Δm are satisfied (Δm = −1 and
Δl = +1).
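The selection rules stated above are easy to encode. A minimal sketch, reproducing the two worked examples:

```python
# A dipole selection-rule checker for hydrogen-like states |n, l, m>, encoding
# exactly the rules stated above: delta-l = +/-1, and delta-m = 0 for z
# polarization or +/-1 for polarization in the xy plane.
def dipole_allowed(state_from, state_to):
    """Return the list of polarizations for which the transition is allowed."""
    (n1, l1, m1), (n2, l2, m2) = state_from, state_to
    if abs(l2 - l1) != 1:
        return []                      # delta-l rule violated: forbidden
    dm = m2 - m1
    if dm == 0:
        return ["z"]
    if dm in (1, -1):
        return ["xy"]
    return []

print(dipole_allowed((3, 1, 0), (2, 1, 0)))   # []: delta-l = 0, forbidden
print(dipole_allowed((3, 0, 0), (2, 1, -1)))  # ['xy']: delta-l = +1, delta-m = -1
```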
10. WHERE IS THE LORENTZIAN? The absorption curves so far are similar to the famous,
ubiquitous Lorentzian that appears in mechanics, optics, and nuclear radiation (in the Mössbauer effect),
but they have the oscillating wings which the Lorentzian doesn't have. This is because all of our results
have been for truncated sinusoidal radiation, i.e., radiation that has a pure sinusoidal form for a finite
period of time. A pure Lorentzian does appear for a perturbation of the form $\hat{H}' = -erE_o\cos(\omega t)\,e^{-\gamma t}$,
i.e., an exponentially damped sinusoidal electric field. (You might remember, we got a Lorentzian shape in
Davies, Problem 5, Chapter 1.) We expect such an exponentially damped classical field to result from an
atomic transition. The Lorentzian is also appropriate for the inverse case of radiation being absorbed by a
gas.
11. SPONTANEOUS EMISSION—EINSTEIN'S A AND B COEFFICIENTS.
In 1917, Einstein developed an argument based on statistical mechanics and thermal equilibrium
which related the spontaneous transition rate to the absorption rate $B_{mk}$, which was calculated above in
Section 8. His great achievement was to do this before the development of quantum mechanics almost 10
years later. Consider a collection of identical atoms in an excited state n inside a cavity in which there is
black body radiation at temperature T. For simplicity, it is assumed that none of the levels are degenerate.
As Planck showed in 1900, the energy density of the radiation as a function of frequency is:
$$u_1(\omega) = \frac{\hbar\omega^3}{\pi^2 c^3}\,\frac{1}{e^{\hbar\omega/kT} - 1},$$
where k as usual is Boltzmann's constant. As in Section 8, $u_1(\omega)\,d\omega$ = radiation energy per volume in the
frequency range [ω, ω + dω]. Let $A_{nm}$ be the probability per unit time that the atom decays
spontaneously from state n to a lower-energy state m, let $B_{mn}u_1(\omega)$ be (as in Section 8) the absorption
probability per unit time for a transition from state m to state n, and let $B_{nm}u_1(\omega)$ be the stimulated
emission probability per unit time for a transition from state n to state m. In thermal equilibrium, let $N_m$ =
the number of atoms in the lower state and $N_n$ = the number of atoms in the excited state. Then thermal
equilibrium requires that the number of transitions from m to n equal the number of transitions from n to
m:
$$N_m B_{mn} u_1(\omega) = N_n\left[A_{nm} + B_{nm} u_1(\omega)\right].$$
Now we can solve for $u_1(\omega)$:
$$u_1(\omega) = \frac{A_{nm}}{(N_m/N_n)B_{mn} - B_{nm}}.$$
But the Boltzmann equation gives:
$$\frac{N_n}{N_m} = \frac{e^{-E_n/kT}}{e^{-E_m/kT}} = e^{-\hbar\omega_{nm}/kT}.$$
Therefore,
$$u_1(\omega) = \frac{A_{nm}}{B_{mn}e^{\hbar\omega/kT} - B_{nm}}.$$
Comparison with Planck's law shows that:
$$B_{nm} = B_{mn}, \qquad A_{nm} = \frac{\hbar\omega^3}{\pi^2 c^3}\,B_{mn}.$$
Using the previous value of $B_{mn}$ for a dipole transition, $A_{nm}$ is:
$$A_{nm} = \frac{e^2\omega^3}{3\pi\epsilon_o c^3\hbar}\,|\langle m|\mathbf{r}|n\rangle|^2.$$
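Putting in numbers for the hydrogen 2p → 1s (Lyman-α) transition gives a feel for the size of $A_{nm}$. The matrix element $|\langle 2p_0|z|1s\rangle| = (128\sqrt{2}/243)\,a_0$ is the standard result for these states, assumed here rather than derived:

```python
# Sketch: A_nm = e^2 * omega^3 * |<m|r|n>|^2 / (3*pi*eps0*hbar*c^3) for the
# hydrogen 2p -> 1s (Lyman-alpha) transition. The matrix element
# |<2p0|z|1s>| = (128*sqrt(2)/243)*a0 is the standard result, assumed here.
import math

e = 1.602176634e-19        # C
eps0 = 8.8541878128e-12    # F/m
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m/s
a0 = 5.29177210903e-11     # m, Bohr radius

E_photon = 10.2 * 1.602176634e-19   # J: the Lyman-alpha energy, (3/4)*13.6 eV
omega = E_photon / hbar             # about 1.55e16 rad/s
r2 = ((128.0 * math.sqrt(2.0) / 243.0) * a0) ** 2

A = e**2 * omega**3 * r2 / (3.0 * math.pi * eps0 * hbar * c**3)
print(A, 1.0 / A)   # ~6e8 s^-1, i.e., a 2p lifetime of roughly 1.6 ns
```

The resulting rate is close to the measured 2p decay rate, a nice check on the dipole formula.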
Einstein provided a clever and accessible way to determine the spontaneous transition rate. A
strictly quantum mechanical treatment explains spontaneous transitions in terms of the quantized
electromagnetic field. In this picture, the EM field is viewed as an infinite number of harmonic oscillators,
each oscillator corresponding to a radiation frequency. The amplitude of a harmonic oscillator mode with a
particular frequency is represented by the number of photons of that frequency. The elementary quantum
mechanical theory of the harmonic oscillator shows that the ground state does not have zero energy but
rather has an energy of $\hbar\omega/2$. This "zero point energy" is always there for every mode of the quantized
EM field, and this energy interacts with an electron in an excited state to "rattle" it out of that state, with the
consequence that it falls into a lower-energy state and in the process (à la Bohr’s original idea)
spontaneously emits radiation.
