Sunteți pe pagina 1din 125

Quantum Field Theory: Spring 2010

Prof. Dave Goldberg


June 1, 2012

Contents
0 Expectations and Notation

1 Worked Example: The SHO


1.1 The Lagrangian and the Equations of Motion .
1.2 The Classical Solution to the SHO . . . . . . .
1.3 Noethers Theorem: Part 1 . . . . . . . . . . .
1.4 SHO QHO . . . . . . . . . . . . . . . . . . .
1.5 Evolution of the Free Field Solution . . . . . .
1.6 The Heisenberg and Interaction Representation
1.6.1 The Heisenberg Representation . . . . .
1.6.2 The Interaction Representation . . . . .
1.6.3 The Interaction Unitary Operator . . .
1.7 Example: Perturbed QHO . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

2 Classical Free Fields


2.1 Natural Units . . . . . . . . . . . . . . . . . . . . .
2.2 The Lagrangian . . . . . . . . . . . . . . . . . . . .
2.3 Minimizing The Action . . . . . . . . . . . . . . .
2.4 The Klein-Gordan Equation . . . . . . . . . . . . .
2.5 What the Lagrangian means . . . . . . . . . . . . .
2.6 Noethers Theorem: Part 2 . . . . . . . . . . . . .
2.6.1 Displacements in space/time . . . . . . . .
2.6.2 Another Lagrangian, and another conserved
3 Free Quantized Scalar Fields
3.1 From Continuous to Quantized Field: The
Free Field . . . . . . . . . . . . . . . . . . .
3.2 The Creation and Annihilation Operators .
3.3 The Hamiltonian . . . . . . . . . . . . . . .
3.4 The Vacuum . . . . . . . . . . . . . . . . .
3.5 Operators and Observables . . . . . . . . .
3.6 Normalizing the field . . . . . . . . . . . . .
3.7 The Propagator . . . . . . . . . . . . . . . .
1

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

5
5
6
7
8
11
12
12
13
14
15

. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
current

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

17
17
18
20
21
22
22
25
26

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

28
Real-valued Scalar
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .

28
29
30
31
32
34
35

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

.
.
.
.
.

36
37
37
39
40

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

42
42
42
43
44
45
46
47
48
50
53

5 Scattering and Feynman Rules


5.1 The Propagator . . . . . . . . . . . . . . . . . . . . . . .
5.2 The Feynman Rules for the Scalar Yukawa Interaction .
5.3 Example: Scattering . . . . . . . . . . . . . . . .
5.3.1 Calculating the Amplitude . . . . . . . . . . . .
5.3.2 2 Particle Scattering Cross Sections (in general)
5.3.3 The cross section . . . . . . . . . . . . . .
5.4 Particle Interaction Energy . . . . . . . . . . . . . . . .
5.5 Example: Annihilation . . . . . . . . . . . . . . .
5.6 Example: Higher order corrections in decay . . . . . .
5.6.1 A First Stab at Renormalization . . . . . . . . .
5.7 Example: A Simple Mass Perturbation . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.

55
55
58
60
60
61
63
64
66
69
72
73

Dirac Equation
1st order vs. Lorentz Invariance . . . . . . . . . . . . . . . . . .
Solutions to the Dirac Equation . . . . . . . . . . . . . . . . . .
What the Dirac Solutions Mean 1: Solves the Dirac Equation .
What the Dirac Solutions Mean 2: Orthogonality and Currents
6.4.1 The Adjoint Spinor . . . . . . . . . . . . . . . . . . . . .
6.4.2 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . .
6.4.3 The Conserved Norm . . . . . . . . . . . . . . . . . . .
What the Dirac Solutions Mean 3: Operators and Transforms .
6.5.1 Operators: Momentum and Energy . . . . . . . . . . . .
6.5.2 Symmetry Operation: Charge Conjugation . . . . . . .
6.5.3 Symmetry Operation: Parity . . . . . . . . . . . . . . .
6.5.4 Operator: Spin . . . . . . . . . . . . . . . . . . . . . . .
6.5.5 Transform Operator: Boosts . . . . . . . . . . . . . . .
6.5.6 Transform Operator: Rotations . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.

75
75
79
80
80
80
81
81
82
82
82
83
84
85
88

3.8
3.9

3.7.1 The Feynman Propagator . . . . . . . . . .


3.7.2 Evaluation with Complex Analysis . . . . .
3.7.3 Classical Field Relation to Greens Function
The Complex Scalar Field . . . . . . . . . . . . . .
Vector Fields (and beyond) . . . . . . . . . . . . .

4 A Simple Scalar Yukawa Interaction


4.1 3rd Order Lagrangians . . . . . . . . .
4.1.1 Significance of terms . . . . . .
4.1.2 The Perturbed Hamiltonian . .
4.2 Particle Decay . . . . . . . . . . . . .
4.2.1 The Interaction Hamiltonian .
4.2.2 The S-Matrix . . . . . . . . . .
4.2.3 Fermis Golden Rule . . . . . .
4.2.4 The Decay Amplitude . . . . .
4.2.5 Calculation of the Decay Rate
4.2.6 Lessons Learned so Far . . . .

6 The
6.1
6.2
6.3
6.4

6.5

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.

6.6
6.7

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

90
91
91
91
92
92
93

7 Quantum Electrodynamics
7.1 Gauge Transformations and Symmetries . . . . . . .
7.2 U(1) Gauge Symmetry . . . . . . . . . . . . . . . . .
7.3 The vector potential . . . . . . . . . . . . . . . . . .
7.4 The 4-Potential and the Field . . . . . . . . . . . . .
7.5 The Dynamics of the Free-Field Potential . . . . . .
7.6 Lorentz and Coulomb Gauge . . . . . . . . . . . . .
7.6.1 Lorentz Gauge . . . . . . . . . . . . . . . . .
7.6.2 Coulomb Gauge . . . . . . . . . . . . . . . .
7.7 Solution to the Classical Free Electromagnetic Field
7.8 Quantizing the Photon Field . . . . . . . . . . . . .
7.8.1 The A operator . . . . . . . . . . . . . . . . .
7.8.2 The Hamiltonian . . . . . . . . . . . . . . . .
7.8.3 The Photon Propagator . . . . . . . . . . . .
7.8.4 EM Interaction Term . . . . . . . . . . . . .
7.9 Deriving the Feynman Rules . . . . . . . . . . . .
7.10 QED Rules . . . . . . . . . . . . . . . . . . . . . . .
7.11 Example: Electron-Electron Scattering . . . . . . . .
7.12 Example: Electron-Positron Annihilation . . . . . .
7.12.1 Simplifying the Annihilation . . . . . . . . .
7.13 Averaging over Spins . . . . . . . . . . . . . . . . . .

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

94
94
94
96
97
99
100
100
100
101
102
102
103
103
103
104
104
106
107
108
111

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

.
.
.
.
.
.
.
.

113
113
115
116
116
117
118
120
121

6.8
6.9

8 The
8.1
8.2
8.3

8.4
8.5

The Dirac Lagrangian . . . . . . .


Quantizing the Dirac Field . . . .
6.7.1 The Hamiltonian: Part 1 .
6.7.2 Anti-Commutator Relations
6.7.3 The Hamiltonian: Part 2 .
Fermi-Dirac Statistics . . . . . . .
The Fermi Propagator . . . . . . .

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

.
.
.
.
.
.
.

Electroweak Model
SU(2) Local Gauge Invariance . . . . . . . . . . . .
Spontaneous Symmetry Breaking . . . . . . . . . .
Electroweak Theory (But without the handedness)
8.3.1 The Electroweak Fields at Rest . . . . . . .
8.3.2 Symmetry Breaking in Electroweak theory .
8.3.3 The Higgs Mechanism . . . . . . . . . . . .
Handedness in the weak force . . . . . . . . . . . .
Quantized Weak Fields . . . . . . . . . . . . . . . .

9 Renormalization, Revisited

.
.
.
.
.
.
.
.

122

Expectations and Notation

I expect:
You should be comfortable enough with the material in the first lecture
that about 90% of it should be review.
If you are an undergrad, you should have taken (and passed with a B or
better in both) Quantum Mechanics I & II, as well as Classical Mechanics
I & II.
You should be familiar with SR. I dont expect youve taken SR, but you
should know about Lorentz boosts, for example.
Notation:
p is an operator, p is a variable or observable.
p~ represents a 3-vector, while p is a 4-vector (to be defined). p (or with
any other Greek letter) are the components of the 4-vector.
Ill be using Einstein summation convention. That is, if you see a b , it
implicitly means:
3
X
a b =
a b
=0

In other words, it sums to a scalar. Whenever you see pairs of indices


like this, you do the sum. Ill explain the significance of the upstairs and
downstairs indices when the time comes.

Worked Example: The SHO

Tong 2.1, 3.1


Gross Chapter 3.1
If you dont understand the Simple Harmonic Oscillator (SHO), then you
will not understand this course. Further, by working through the SHO, youll
know exactly how were going to introduce quantum fields.

1.1

The Lagrangian and the Equations of Motion

In freshman year, you learned about the Simple Harmonic Oscillator. Were
going to start by describing everything we know in the classical regime. No
quantum whatsoever, at least for a little bit. Youll know when quantum shows
up, because youll start seeing an h or ~.
We have a system with a Kinetic and potential energy:
K=

1
mx 2
2

1
m 2 x2
2
where we used the magical relationship:
r
k
=
m
U=

This yields a Lagrangian:


L=

1
1
mx 2 m 2 x2
2
2

(1)

We can compute the canonical momentum via:


p=

L
= mx
x

(2)

which, of course it is.


Finally, we get the Euler-Lagrange equations for a non-relativistic field with
one degree of freedom, which yields:


L
d L
=
(3)
dt x
x
= 2 x

(4)

1.2

The Classical Solution to the SHO

We know that the simple Harmonic Oscillator has a solution of the form:
1
x(t) = [c exp(it) + c exp(it)]
2

(5)

We have long since grown past the point where we need to talk about sines and
cosines. Were big boys and girls. However, because there is both a c and a c ,
the overall position is necessarily real.
Whats more, since c is complex, there are two variables, which uniquely
gives both the amplitude of the oscillation, and the phase.
Furthermore:
i
dx
= [c exp(it) c exp(it)]
v=
dt
2
This yields the classical energy:
K

1
m 2 cc
2
1
m 2 cc +
2



1
m 2 c2 exp(2it) + c2 exp(2it)
4


1
m 2 c2 exp(2it) + c2 exp(2it)
4

so:
E = m 2 cc

(6)

You should bear this in mind. We know that in quantum mechanics, the calculation of energy is very important. When we introduce the operator called the
Hamiltonian, its nothing more than the generator of the energy. We also know
that it plays a very important role in evolving the wave-function.
If you like, you can imagine that at some point in the future perhaps when
we introduce quantum mechanics, c and c (which are currently just numbers
albeit complex ones might be turned into something else. So, perhaps its
more appropriate to say:
m
1
~ (cc + c c)
(7)
~
2
Theres nothing wrong with what I did. Im allowed to do it, and you cant stop
me. However... my introduction of ~, as I did, should certainly raise some red
flags that were about to move into quantum mechanics. But clearly for now
theres nothing technically quantum about my choice.
If I define a dimensionless number, a, such that:
r
m
c
(8)
a=
h
E=

(which again, Im perfectly free to do), then we get:


E=

1
~(aa + a a)
2

in full generality.
6

(9)

1.3

Noethers Theorem: Part 1

When we get into QFT proper, Im going to skip a proof of Noethers theorem
(but it will appear in your notes in case youre curious). Basically, Noethers
theorem states:
If the action is invariant under some transformation, then there
is a conserved quantity for the system.
What conserved quantity?
Were going to deal (in this case) with a system of n degrees of freedom, qi .
QFT formally introduces an infinite number of degrees of freedom, which makes
things a bit complicated. Now, we know that for a properly minimized action,
we get:


L
d L
=
dt qi
qi
which is just the Euler-Lagrange equations.
However, we can imagine adjusting the Lagrangian by an amount dL by
varying the degrees of freedom. That is:
X L
L =
qi
qi

If we can adjust the system such that L remains fixed, then clearly the action
is fixed. Of course, the RHS of this equation looks a lot like the LHS of the
Euler-Lagrange equations. Thus:
"
#
dL
d X L dqi
=
(10)
dt i qi d
d

where is the parameter under which the Lagrangian doesnt explicitly depend.
For example, if were working in Cartesian coordinates, then the transformation:
xx+
represents a coordinate shift in the x-direction, and thus:
dx
=1
d
Pretty boring.
More interesting, perhaps, is a rotation around the z-axis:
x x cos + y sin
y y cos x sin

yielding:
dx
d
dy
d

= y
= x

Thus, the stuff in the parentheses is a conserved quantity.


For example, consider:
L=

1 X 2
qi V (q)
m
2
i

where the potential is assumed to be function of all possible (Cartesian) coordinates.


Consider, q = x. We get:
d
dV
[mx]
=
dt
dx
So if there is no explicit dependence of the potential in the x-direction, the
conserved quantity is the x-component of momentum.
Ill leave it as an exercise to show that if you vary, say, the azimuthal coordinate, , you get conservation of angular momentum in the z-direction.
There is a special case: conservation of the action over variations in time.
In that case, it is clear that the RHS of equation (10) becomes dL/dt, and thus
the entire equation may be combined to yield:
"
#
d X L
qi L = 0
(11)
dt i qi
You may recognize the bit in the hard brackets as the Hamiltonian, or equivalently, energy.
For our sample Lagrangian this yields:
H=

1 X 2
qi + V (q)
m
2
i

Of course.
Remember: energy and time invariance are very intimately related.

1.4

SHO QHO

In your undergraduate quantum class, when you went from classical to quantum
fields, you probably did so by directly solving the Schroedinger equation. If you
did that for a SHO, youd find the Hermite polynomials. Were not going to do
that here.
Instead, we note that in our new coordinates:
r
~ 1
[a exp(it) + a exp(it)]
x(t) =
m 2

1
p(t) = i m~ [a exp(it) a exp(it)]
2

and therefore, algebraically, we get:


"r
#
r
1
m
1
x(t) + i
p(t)
a(t) =
~
m~
2
"r
#
r
1
m
1
a (t) =
x(t) i
p(t)
~
m~
2

(12)
(13)
(14)

where weve absorbed the exponential term in our definition of a(t), and a (t).
Furthermore, since were clearly sliding into the quantum mechanical picture
anyway, lets go whole hog. For now, well choose the Schroedinger picture, since
its the one youre most familiar with. In that case, all of our observables become
operators on some wave-function:
p(t) p|(t)i = p|(t)i

(15)

where Ive gone all the way and used the Dirac bra and ket notation explicitly. Further, all operators in this system (Schroedinger Picture) are timeindependent. So we get:
"r
#
r
m
1
1
x
+i
p
a
=
~
m~
2
"r
#
r
1
m
1

a =
x
i
p
~
m~
2
where Ive done a switcheroo from to because Im now using operators
rather than numbers. Further, as you will recall from your QM course, we
almost always care only about commutation relations.
Youll recall
p~ = i~
(16)
in general and
in particular (where x =
Likewise,

p = i~x
for shorthand).
= ~x
~x

(17)

So:
[
x, p]

= x(i~x ) (i~x )x

= i~
or

[
x, p] = i~
9

(18)

Consequently:
1
i
i
1
(T ERM S...)[
x, x] + (T ERM S...)[
p, p] [
x, p] + [
p, x
]
2
2
2~
2~
1 1
= 0+0+ +
2 2
= 1

[
a, a
] =

or equivalently:
a
a
= a
a
+1

(19)

which was TOTALLY the entire point of this exercise. Now, I know youve seen
this before, but since were going to be using very similar results for some time,
its important that you absolutely get this.
Remember our energy (equation 9)? It now becomes an operator as well:

=
=


1 
~ a
a
+a
a

2 

1
~ a
a
+
2

(20)
(21)
(22)

What are the meanings of these operators? Well, I trust you already know that.
Theyre simply creation and annihilation (ladder) operators. We can even prove
it.
For instance, suppose we say that the system is in state, |ni with Eigenenergy ~N (n is not necessarily an integer). Equivalently, we have:
1
a
a
|ni = (N )|ni
2
Now consider:


1

H
a |ni = ~ a
a
+
a
|ni
2


1

|ni
= ~ a
a
a
+ a
2



 1

a
+1 + a
= ~ a
a
|ni
2


3
|ni
= ~ a
a
a
+ a
2


3

= ~ a
(N 1/2) + a
|ni
2
= ~(N + 1)
a |ni

10

Or equivalently (and throwing in the normalization for good measure),

a
|ni = n + 1|n + 1i

(23)

And where similar arguments show:


a
|ni =
And finally,

n|n 1i

=a
N
a

(24)
(25)

which is the number operator. Because of the square root bit, it n cant be
negative. So it turns out that the only definition which makes sense if for n = 0
to be the ground state (E0 = 1/2~, and all other states to be steps of integer
values such that:


1
~
(26)
En = n +
2

This is the magic of quantum mechanics, and this is a result that I expect
youve seen in Quantum I or II.
These step-up and step-down operators are going to turn out to be exactly
analogous to the creation and annihilation operators for particles in a field.
Hint of things to come:
In QFT, the square of the amplitude of a field is going to be
something like the number of particles.

1.5

Evolution of the Free Field Solution

Suppose we have a simple harmonic oscillator. In principle, we can describe it


at any instant by:
|i = cn |ni
where out of sheer laziness, Im generally going to omit explicit sums. This cn
is different that the cs we saw before.
Now the thing about quantum mechanics (and QFT) is that it is unitary,
which means that the state of the system tells you everything you need to know
about the future evolution. Or more specifically, we can define the evolution
via:
= H|i

i~|i
(27)
The time-dependent Schroedinger equation. Please note that we are going to
want a form that looks something like this for every system and field we encounter because it will tell us how a field will evolve.
For a single mode, cn is a single value, and we get:
i~cn |ni = En cn |ni
which is solved by


iEn t
cn (t) = cn (0) exp
~
11

In other words, if were in an energy Eigenstate, the free field solution says that
were going to stay there, and only change in phase.
In practice, what wed like to do is define a Unitary Operator (Evolution
operator, if you like):
!

iHt

(28)
U (t) = exp
~
such that:
and similarly

|f i = U(t)|ii
(t)
hf | = hi|U

where f and i denote the final and initial states of the system, respectively.
Get used to it. Well use that sort of shorthand a lot. Of course, both Unitarity
(and inspection) guarantee that:
= U
1
U

(29)

If youre confused about how we take the exponent of an operator, recall


that this can simply be re-written as a Taylor series:
(t) = I tH
+ 1 t2 H
H
...
U
2
For a system decomposed into eigenstates of the Hamiltonian, the Unitary
operator simply changes the phase of each of the cn coefficients.
In reality, though, QM tells us only about the Observables of a system. For
there is an observable O, such that:
any operator, O,
(t)O
U
(t)|ii
hO(t)i = hi|U
Remember: by definition you only get to measure the observables and eigenstates. Anything which leaves them unchanged is fair game. Which brings us
to...

1.6
1.6.1

The Heisenberg and Interaction Representation


The Heisenberg Representation

Nothing prevents us from saying that the operators change with time, and that
the states of the system remain constant. Indeed, this is the Heisenberg picture of quantum mechanics. The one weve been using so far is the Schroedinger
picture. Both are pretty simple so long as our basis states are eigen-functions
of the Hamiltonian.
Essentially:
H = U
O
U

O
(30)
And likewise:

|iH = U(t) |(t)iS

12

(31)

(t)|(0)i from the Schroedinger picture and makes


which cancels the |(t)i = U
the wave-function fixed.
H (t) = H (0)
The observables evolve exactly as the do in the Schroedinger picture.
Incidentally, since the Hamiltonian necessarily commutes with itself:
H = H
S
H
This works all well and good if were dealing with Eigen-functions of the
Hamiltonians, but suppose were not?
1.6.2

The Interaction Representation

Suppose instead that we have:


=H
0 + H
int (t)
H

(32)

0 , and then an interaction


We imagine that our system is in some Eigenstate of H
is introduced. What happens next?
Weve already seen (in the Schroedinger interpretation), a wave-function
evolves via:
(t, t0 )|ii
|f i = U
to arbitrarily allow a wavewhere Ive introduced the extra argument in U
function to evolve from t0 (rather than 0) to t. In general, Ill omit the explicit
reference to t unless I actually need it.
Now, what happens if we define:
I
U
0 U
U

(33)

isnt constructed simply from


Be careful! This isnt as trivial as it seems, and U
int and the reason should be clear. H
0 and H
int dont commute, and when
H
you expand out an exponential, youre going to get all sorts of combinations of
them.
In the Heisenberg representation, wed
But now consider some operator, O.
write it as:
=U
O(t
0 )U

O(t)

Where the O(t)


on the left is the Heisenberg version of the operator and on
the left, its the Schroedinger. Lets expand the unitary operator using, equation (33). We get:

O(t)
=
=


U

U
I 0 O(t0 )U0 UI
O
0 (t)U
I
U
I

0 (t) is the operator that you would have gotten from H


0 alone: the
where O
free-field version.

13

This is the Interaction Representation. Basically, this means that you find
the eigen-vectors and operators as if they were in the Heisenberg representation
in the free-field limit and then see how they change with time.
I will be the main challenge of pretty much everything were doSolving for U
ing. Naturally, once we have it, finding the evolution of a state in the interaction
picture will be quite straightforward (if difficult to implement):
I |ii
|f i = U

(34)

and if the interaction is only for a finite duration, this allows us to evolve system
through an interaction.
1.6.3

The Interaction Unitary Operator

But given some specified interaction, how do we calculate the interaction operator?
Ill forgo the algebra and point out that because it doesnt commute with
0 , the interaction term is going to have to be defined recursively. Importantly:
H
(t, t0 ) 6= exp(iH
int (t t0 )/~)
U
Ill simply give the answer (and then explain it). First, define the following:

I = U
H
H
0 int U0

(35)

which is just the interaction unitary operator written in the interaction representation.
(t, t0 ) = I i
U
~

t0

2 Z t

Z t
I (t ) + i
I (t )H
I (t ) + ... (36)
dt
dt H
dt H
~
t0
t0

The first term is easy to explain. No interaction means that a state is unchanged.
The second term is simple as well. Its simply the Taylor expansion over a short
period of time. The third is where things get confusing. The Issue is that t < t
according to the limits, so the Hamiltonians have to be applied in the order as
written.
Naturally, we could keep writing terms forever. This is the origin of the
fact that there are infinitely many Feynman diagrams to describe a process.
However, the further they are to the right (in this case), the less important they
are going to be.
To keep everything tidy, we can express the entire thing as:


Z t
I (t )
(t, t0 ) = T exp i
dt H
(37)
U
~ t0
where the T function basically means: at any given time, expand the whole
thing out and sort it so that all terms with the earliest time terms go furthest
to the right.
Its just a shorthand, but one which will be very useful, even once we move
into QFT.
14

1.7

Example: Perturbed QHO

Lets put this into practice. Consider for the moment a QHO. Well start it in
the ground state. This will be commonplace for a number of QFT calculations.
Then at t0 = 0, we start applying a force, F0 for a period , after which we
stop. Classically, the work done is thus F0 x, and thus:
r

F0
~

Hint =
a
+a

m
2
All Ive done here is expand out x
from our original definition.
I , so Ill give you the
Its straightforward (but a bit tedious) to compute H
answer:
r

~
F0

HI =
(38)
a
eit + a
eit
2 m

The form shouldnt surprise you.


So what about the Unitary operator itself?
The second term is a bit tougher, but still
The first term, of course, is I.
straightforward:
r
Z


F0
1 1
i t
a
eit 1 a
eit 1
dt HI (t ) =

~ t0
2 m~

and so on, with the 3rd term, which Ill leave as an exercise. I know, obnoxious,
right?
What happens if we apply this force on the ground state for a relatively
short time? Well, the time terms in the parentheses reduce, and we get:
r

F
1 1
0
I (t) I +
U
a
(it) a
(it)
2 m~
r
1
1

= I iF0 t
(
a+a
)
m~
2

So what is the probability of, say, pushing the oscillator to the first excited
state?
2
I |0i|2
P10 = S10
= |h1|U
As written, this is easy! After all, only the a
operator matters. So we get:
r
1
1
S10 = iF0 t
2 m~
or
P = F02 t2
What do we make of this?

15

1
2m~

Well, lets think about this classically. If we assume that F0 is small, and
t is much less than a period then classically. Basically, its as if we gave the
oscillator an impulse:
p = F0 t
Of course, this means that the total energy of the system is:
E = K =

p2
F 2 t2
= 0
2m
2m

But this is (by construction), a small number. For example, it is only:


E
F 2 t2
= 0
~
2m~
of the excitation energy.
Hot damn! This is exactly the quantum probability that we found. This is,
of course, a good thing. They should limit to the same value.

16

Classical Free Fields

Gross 1.1, 1.2, 2.1, 4.1, 8.1-8.2


Tong: 0.1, 1
Were dealing with fields in this class, not particles. Oh sure, at the end of
the day were going to have to relate what we learn about the fields to particle
behavior. But the name of the game will be to compute the properties of the field
first. Typically youve only ever done this with photons. But why? Electrons
(and quarks, and neutrinos and every other particle) should work the same way.
What is a field? A field is basically a scalar (or vector, or tensor) what
has potentially a different value at every point in space and time. A field is
not a wave-function, even though some of the same equations describe, say, a
relativistic scalar field, and a single relativistic quantized particle.
For one thing, when we quantize fields were going to realize that talking
about the wave-function of an electron is meaningless. Electrons and positrons
pop into and out of existence constantly. This is the curse and the beauty of a
special relativistic theory.
Secondly, in nonrelativistic QM, the wave-function itself was never measurable. It represented the square root of a probability, but couldnt, itself, be
measured. A field can be. A temperature field is a good example.
So heres what were going to do. Im going to start by describing the
properties of a field, and then were going to show how the dynamics fall out
naturally. To begin with, were going to imagine something very much like a
temperature field: a real valued scalar field. But before we get into it, were
going to have to simplify our notation somewhat.
Were going to take an unusual route for the next little bit. I am going to
concentrate on just giving you a flavor of how classical fields work, so that when
we start doing quantum fields, you wont be surprised. Because were doing this
in an unusual order, were going to skip around a bit in our texts. Dont worry.
Well get back to a linear progression in short order.

2.1

Natural Units

Throughout, were going to use natural units, meaning:


c=~=1
This may cause a bit of confusion, not because youre not smart enough to understand units, but because Ive already told you that were not using quantum
mechanics, and therefore you might not expect ~ to show up at all. It will, but
only when we want give some correspondence to actual particles.
Using natural units, all quantities can be expressed as energy to some power:
1

[m] = [E]

17

(39)

for example. This one should be obvious, since E = mc2 . Thus, for example, in
natural units, the mass of a proton would be 935M eV . Dont say, the energy...
Its the mass. Just in natural units.
We get a more complicated behavior when referring to length. Its worth
noting that just like in classical mechanics (in which we define a deBroglie
wavelength), we can define a Compton wavelength:
C =

~
mc

(40)

which has the correct dimensionality. The physical interpretation of the Compton wavelength is that it is the smallest scale on which a single particle can be
identified. On smaller scales, the energy goes up, and particles can be created
out of the vacuum. Moreover, it is clear that in natural units:
1

[L] = [E]

(41)

Thus large scales are low energies and vice-versa. To put things in perspective,
in natural units, 1
A is (1.970keV )1 . Note that this is much less than the mass
of an electron, and thus atoms are in the classical limit.
Finally, note that since L = cT (unit-wise), we immediately get:
1

[T ] = [E]

(42)

Of course, these units can be combined into all sorts of things. For example,
4
energy density is in units of [E] , speeds are dimensionless (fractions of c), and
so on.
You will need to master these units, and to assure that you do so, Ive
included an exercise in the first homework.

2.2

The Lagrangian

I know Ive said it before, but the wave-function below not the same thing as
weve seen in non-relativistic quantum mechanics. Sure, its an amplitude of a
wave, so in some way, 2 is going to give us some useful information, but it is
not true that | 2 |d3 ~x = P d3 ~x.
This is a classical field. You should be thinking about the electromagnetic
field, NOT the electron wave-function. All were saying for the moment is that
the wave is oscillating.
As it will turn out, the thing that we care most about when evaluating
classical (or quantum) fields is the Lagrangian density, L, which is defined in
such a way that the action:
Z
S d4 xL
(43)

is minimized.
Let me start with the simplest possible Lagrangian density, that of a realvalued scalar field.
18

I could just write this, or we could consider a 1-dimensional array of beads


(each of mass, m), connected by springs with unstretched length, from one
another, and where each bead is constrained to move in the y-direction. Further,
we can imagine looping this string of beads into a circle so that the system is
periodic.
Thus, the kinetic energy will be:
K=

X1
i

my i2

Thats the easy part. However, if we consider the vertical displacement of two
beads:
s = (yi+1 yi )
then the potential energy stored in the ring will be:
U=

1X
k(yi+1 yi )2
2 i

or (just guessing)
1X 2
yi
L=
2 i

dy
dx

2

Replacing y with , the value of the field, we might even suppose the following
in the continuous (and 3-d) limit:
L=

1 2 1
1
2
() m2 2
2
2
2

(44)

Where does this come from? Well, to some degree, I pulled it out of thin
air. However, we can say a few things about it.
All Lagrangians must be real valued, and be Lorentz-invariant scalars. Whats
more, in our special units, the Lagrangian density will be [E/l3 ] = [E]4 . Since
length has units of inverse energy, as written, our field, , has units of energy.
The m clearly stands for the mass of the particle, but Ive implied that we
have a continuous field. Dont worry about it. Well see how we get something
kind of like particles even before formally quantizing our fields.
We want to write everything in a Lorentz invariant way. That means that
everything should be a simple scalar. In relativity, components of a vector are
written like:
V
and where can take on the values 0,1,2,3 (t,x,y,z).
Ordinary derivatives are written as:
, =
Either notation will do. Im likely to use the latter. So for = 2, for example,
this is equal to d/dy.
19

There are two more rules you need to know. 1st is the rule for raising or
lowering indices. We can multiply by:

1 0
0
0
0 1 0
0

(45)
= =
0 0 1 0
0 0
0 1

The second rule is that whenever you see the same dummy index up top
and down below you add them all together. For example:
A B

=
=

A0 B0 + A1 B1 + A2 B2 + A3 B3
~ B
~
A0 B 0 A

So this (or the Minkowski metric, as its formally known) is basically a way
of taking dot products in special relativity.
Thus, our Lagrangian density becomes (and you can check my math on this):
L=

1
1
m2 2
2
2

(46)

This is a very general result. Youll soon see why were introducing all of this
in the first place, but essentially it is guaranteed to be Lorentz invariant. What
does that mean? It means that if I change my coordinates by applying a transform:

v 0 0
v 0 0

=
0
0 1 0
0
0 0 1

(a boost in the x-direction) or

1
0
0 cos
=
0 sin
0
0

0
sin
cos
0

0
0

0
1

(a rotation around the z-direction) then the Lagrangian at the rotated (or
boosted) coordinate will remain the same.
Im not going to prove this here. I will simply assert that if you multiply scalars or all of your upstairs vector indices are contracted with all of
your downstairs ones, then you are guaranteed to have a Lorentz-invariant
expression.
Ours does.

2.3

Minimizing The Action

Suppose you have some minimum action, S (remember that we expect action to
be minimized thats why we introduced it), parameterized by some continuous
field, (x).
20

If we add some small perturbation to the field, (x). We then get:




Z
L
L
4
+
( )
S = d x

( )
We can now use the equivalent of integration by parts where
v = ( )
and
u=

L
( )

so
v =
and
u =
Thus:
S =

L

d x

L
( )




L

( )

where the last term goes away at sufficiently large times and distances. Thus,
the terms in the square brackets must cancel for any parameterization of the
minimized action:


L
L
=

(47)
( )

Hurrah! Its the relativistic form of the Euler-Lagrange equations!

2.4

The Klein-Gordan Equation

Solving for our Lagrangian (equation 46) using the Euler-Lagrange equations
(47) we get:
+ m2 = 0
(and similarly with a star).
The first term is an operator known as the dAlembertian, and can more
simply be written as:
 + m2 = 0
(48)
The equation itself is known as the Klein-Gordan Equation, and if you
imagine the field as a rubber-sheet, you can see why the Lagrangian has the form
it does. More to the point, we get a homogeneous linear equation (because we
started with only second-order terms and then took derivatives), which means
that we can get lots of linearly independent solutions. Thats why we chose
this as our free-field solution in the first place. After all, we now have a wave
equation, which has the solution:
~

(x) = aei(k~xt) + a ei(k~xt)


21

(49)

which guaranteed to be real, and is satisfied if:


2 + k 2 + m2 = 0
Plugging in units:
(~)2 = (~kc)2 + (mc2 )2
So the m in the last term (particle) is offset by the ~s elsewhere.
This yields:
E 2 = (pc)2 + (mc2 )2
which should look familiar to you if youve ever done any SR.
Incidentally, we will always assume that is positive, and should the need
arise, well simply say .

2.5

What the Lagrangian means

What does the Lagrangian mean? Just as with a single particle Lagrangian, we
can define a conjugate momentum density:
=

L
( )

and for our K-G Lagrangian, we get:


=

L
=
( )

but because were smart, we know that the time component of the momentum
density (which well simply call ) will be important. After all, were ultimately
worried about time evolution.
Thus:
=
(50)
Even better, we can define a conserved current. This is a very generic statement of the field generalization of Noethers Theorem.

2.6

Noethers Theorem: Part 2

We showed for a single classical particle that if the Lagrangian (or the action,
generally) is invariant under some transformation then there is a conserved
value. Now, were going to do something more general. Noethers Theorem rely
states:
If a system has a continuously symmetric action, then there is a
corresponding conserved current.

22

Heres what I mean. When I first introduced Noethers Theorem for a single particle, I said that the Lagrangian needed to be invariant under a transformation.
Or, in other words:
L
=0

where is the parameter of the symmetry transformation.


This form isnt quite correct. Indeed, in the special cause of a time translation, this wasnt correct even for a single particle. After all, were really trying
to make the action invariant under a symmetry transformation. In that case,
for a single particle, were allowed to have a change in the Lagrangian so long
as:
L L + L
such that

dtL = 0

integrated over all time.


We can do a similar trick here. Lets imagine that we do a transformation
such that:
L = F
(51)
Notice that this is exactly equivalent to taking a divergence (only in 3+1 dimensions). Now consider:
Z
Z
4
S = d xL = d4 x F
The right hand side looks exactly like the input to Gausss theorem:
Z
Z
~
d3 x ~v = d2 x~v dA
s

where we make the general assumption that we are integrating over infinity, and
any relevant interactions will vanish. Thus:
Z
Z
d4 x F = d3 xF dA
which also vanishes.
In other words, if we can manipulate a transformation such that the change
in the Lagrangian takes the form above, were all set.
How can we change the Lagrangian? Well, consider a Lagrangian of many
fields, a (where each a represents a different field). In that case, suppose we
make a transformation:
a = Xa ()
(52)
where, like the function, F , X is to be determined by the transformation.

23

Now, lets consider the transformation:


L
L
a +
(a )
a
( a )


L
L
a +
=
(a )
( a )
( a )


L
a
=
( a )


L
Xa
=
( a )

L =
F

L
Xa F
( a )

= 0

Or, to put it another way, there is a conserved current such that:


j =

L
Xa () F ()
( a )

(53)

where there is an explicit sum over all the fields.


I should note what I mean by a conserved current:
j = 0

(54)

which is both Lorentz-invariant (note the matching indices!) and equivalent to:
j 0 + ~j = 0
Assume that j0 = is the density of the current. This simply means that:
Z


dV + ~j = 0

over any arbitrary volume. Thus we can define a charge (literal or otherwise)
such that:
Z
Q = dV

yielding:

Q +

dV ~j = 0

Again, we exploit the divergence theorem to get:


Z
Z
~ ~j = 0
dV ~j = dA
over a sufficiently large volume that encompasses all currents.
Thus:
Q = 0
That is what we mean by a conserved quantity! The total amount of stuff is
conserved.
24

2.6.1

Displacements in space/time

Consider the small following displacement in space-time:


x x
(x) (x) +
where the latter term is (and thus X()).
Likewise, the Lagrangian density transforms as:
L(x) L(x) + L(x)
and so:
F = L
The current as written before is:
j

L
Xa () F ()
( a )
L
a L
=
( a )


L
a L
=
( a )

= =

In fact, because I can choose any component of I like to be zero or otherwise,


this actually represents 4 currents. That is:
T =

L
a L
( a )

Where we have to sum over all of the independent fields in the theory. Fortunately, we only have 1, .
This 44 matrix is known as the stress-energy tensor T , which becomes
particularly interesting as the source term in general relativity.
~
Note that T 00 is the thing we normally call, , the density, while T i0 is J,
the momentum density.
So, for example, if we consider the energy-momentum 4-vector, T0 , we get:
a a 0 L
and thus for the Klein-Gordan equation, this becomes:
T 00

=
=
=

2 L


1
1 2 1
()2 m2 2
2
2
2
2
1 2 1
1
+ ()2 + m2 2
2
2
2

25

(55)

By direct inspection, we see that the kinetic energy density is:


K=

1 2

pretty much exactly what youd expect by inspection.


and the potential energy (normally broken into gradient density and
potential) is:
1
1
U = ()2 + m2 2
2
2
2.6.2

Another Lagrangian, and another conserved current

The stress-energy tensor is going to be a conserved current for almost any Lagrangian. Indeed, as written (and provided that the Lagrangian has no explicit
position or time dependence) the exact relation will be as derived.
Real-valued scalar fields are kind of boring because once youve got the S-E
tensor, youre pretty much done with conserved quantities. Not so in general.
Consider the Lagrangian density of the complex scalar field:
L = m2

(56)

By inspection, this relation is necessarily real. The trick to solving it in generality is to consider and as separate fields. Under those conditions, we get
two Euler-Lagrange equations:
+ m2
+ m2

= 0
= 0

which you know the solutions to, of course.


Well, besides time and space translation, what else keeps the Lagrangian
unchanged? How about a global shift in phase for our complex wave-functions?
(AGAIN, this is NOT quantum mechanics). Thus:

ei

ei

Clearly, a small change, << 1 yields:


= i
and
= i
The Lagrangian itself is real, so:
L = 0

26

and hence
F = 0
All fields need to be included, and thus we get:
j = i [ ]

(57)

This, as we will see, will be the electromagnetic current!


Hurray! In very little work, we already see the difference between complex
and real fields. A complex field has charge!

27

Free Quantized Scalar Fields

Gross 1.3-1.8, 4.1-4.5, 9.3


Tong 2.1-2.7
Its finally time to actually start doing Quantum Field Theory, but dont
worry, because of what weve done before, a lot of this will be completely natural.

3.1

From Continuous to Quantized Field: The Real-valued


Scalar Free Field

Remember how we went from a continuous to a quantized version of the harmonic oscillator? We first wrote down everything in the continuous limit, and
then realized that momentum and position dont commute:
[
x, p] = i~
or, in natural units, and generalizing to 3-dimensions:
[
xi , pj ] = iij
Note the delta-function. Thats because for a 3-d oscillator, there are 3 degrees
of freedom. For a continuous field, position is going to a coordinate, not a
degree of freedom. The value of the field plays the same role as position did in
the QHO case, and the canonical momentum plays the role of momentum. In
any event, there are now an infinite number of degrees of freedom, and we can
immediately write down:
h
i
a (~x, t),
b (~y , t) = i (3) (~x ~y)ab
(58)

For us, the last (Kronicker) isnt really relevant since there is only the 1-field,
but the point is that a field and its conjugate momentum dont commute but a
field and other fields, or other conjugate momenta do commute.
Youll no doubt recall the free-field solution to the classical Klein-Gordan
equation (49):
(x) = ap~ ei(~p~xEp~t) + ap~ ei(~p~xEp~t)
But all possible values of ~k are allowed! Since a) The field must be real, and b)
~ = 1, which means that ~k = ~p, we get:
Z

d3 p
1  i(~p~xEp~t)
i(~
p~
xEp
~ t)
p
(x) =
(59)
a
e
+
a
e
p
~
p
~
(2)3 2Ep~

The integral is done over all of p~-space, and here Ive decided (following Tong)
to attach the (2)3 to the complexreal transform, with no similar factor in
realcomplex.
28

Likewise, weve defined:


Ep~ = +

p
p~2 + m2

(60)

This is the most p


general real solution of (x). The only remaining confusion
might be where the 2Ep~ comes from. Dont worry about it for now. Well get
to it in good time. As written, theres nothing to stop us from normalizing the
integral however we want.
However, everything in the parentheses in equation (59) looks exactly like the
Quantum Harmonic Oscillator. Indeed! It is the same equation, and without
doing any more work, we can immediately write down the quantum version of
the field:
Z

1  i(~p~xt)
d3 p
i(~
p~
xEp
~ t)

p
(61)
a

e
+
a

e
(x) =
p
~
p
~
(2)3 2Ep~

Its now an operator acting on the state of the system. The system for this
purpose is an infinitely long vector which lists the number of excitations of each
of the infinitely many possible momenta of the field.
What this means, of course, is that a
p~ annihilates a particle of momentum,
p, while a
~
p~ creates one.
Using our commutator relation (or simple inspection), we can immediately
get the conjugate momentum operator on the field:
r
Z

Ep~  i(~p~xt)
d3 p
i(~
p~
xEp
~ t)
(62)
a

(x) = i
p
~
p
~
(2)3
2

Note that Ive written the field and momentum operators in the Heisenberg
or Interaction (same thing for a free field) representation. This differs somewhat
from Tong, who does it in the Schroedinger representation.

3.2

The Creation and Annihilation Operators

Using the commutation relation above:


h
i
x, t)),
(~
(~y , t) = i (3) (~x ~y)

we can quickly (albeit with a great deal of work) get the following commutation
relation:
i
h
p ~q)
(63)
a
p~ , a
q~ = (2)3 (3) (~
These operators play exactly the role you might expect. For instance, supposing
we started with a vacuum (and more on that in a bit):
|0i
then the operation:

a
p~ |0i
29

has the effect of creating a particle of momentum, p~. The a


p~ operator operations
as an annihilation term.
Likewise (appropriately normalized), the operation:
p~
p~ N
a p~ a
counts the number of particles with a particular momentum. Well talk more
about how to appropriately normalize these in a bit.

3.3

The Hamiltonian

Before we deal with the properties of individual particles, we need to deal with
the properties of the vacuum, and before we do that, we need to construct a
Hamiltonian. Weve already seen the energy density from equation (55).
Thus, we might suppose (correctly) that the Hamiltonian is:
Z
1
2 + m2 2

d3 x
2 + ()
H=
2
Nothing prevents us from taking a gradient of an operator. After all, the operator, itself, can be expanded as a Fourier series, as weve seen. Thus:
Z

d3 p
1 
i(~
i(~
p~
xt)
p~
xEp
~ t)

p
=
i~
p
a

i~
p
a

e
p
~
p
~
(2)3 2Ep~

All of the terms in the Hamiltonian are quadratic in the various operators. For
example, we could multiply out one of the terms as follows:
Z



RRR
1
d3 pd3 q
1 2
iq x
ip x
iq x
ip x
(x)

p
m
d3 x(x)
=
d3 x
a

e
+
a

e
a

e
+
a

e
q
~
p
~
q
~
p
~
2
(2)6
4Ep~ Eq~
h
RRR

1
d3 pd3 q
p
=
d3 x
a
p~ a
q~ ei(p +q )x + a
p~ a
q~ ei(p q )x
6
(2)
4Ep~ Eq~
i

+
ap~ a
q~ ei(p q )x + a
p~ a
q~ ei(p +q )x
The first and last of these terms are going to cancel with other terms, and
even if they didnt, it is clear that they wouldnt contribute in most situations.
After all, since the states of the field are presumed to be Eigenstates of the
Hamiltonian, two creation or two annihilation operators are necessarily going
to produce zero contribution:
hn|
ap~ a
q~|ni = 0
The middle two terms are the most interesting. Lets look at the second term:
Z Z Z

1
1 2
d3 pd3 q
p
m
d3 x
a
p~ a
q~ ei(p q )x
=
6
2
(2)
4Ep~ Eq~
Z Z 3 3
1
d pd q
p
(2)3 (~
p ~q)ei(Ep Eq )t a
p~ a
q~ =
(2)6
4Ep~ Eq~
Z
d3 p 1
1 2
m
a
p~ a
p~
2
(2)3 2Ep~
30

More generally, our Hamiltonian will be of the form:


Z
= d3 xd3 pd3 q
H
ap~ a
q~ (...) + ...
with each term quadratic in one of the dummy variables. By construction, it is
clear that ~
q = p~ will be the only terms to produce non-zero contributions.
While I wont duplicate Gross or Tongs derivation here (as its surprisingly
similar to our QHO), I will note the important relation:
Z
~
d3 xeik~x = (2)3 (~k)
(64)
This is one of the most useful and powerful relations we will see. Memorize it.
It allows us to get rid of all of the arguments except for 1, ~p, yielding:
Z
i
h
d3 p

= 1
H
a

+
a

E
p
~ p
~
p
~
~
p
~ p
2
(2)3


Z
d3 p
1

3
=
a

+
E
(2
)(0)
(65)
~
p
~
p
~ p
(2)3
2

3.4

The Vacuum

What happens when we try to measure the energy of the vacuum, |0i?
I warned you about this disaster before! The ground-state (vacuum energy)
is formally infinite. This isnt really a huge deal. After all, were integrating
over an infinite volume of space:
Z
1
E0 =
d3 p Ep~ (0)
2
Z Z

1
1
ei~x~p p~=0 Ep~
d3 pd3 x
=
3
2
(2)
Z
1 1
V
d3 pEp~
=
2 (2)3
Z
1
1
E0 =
d3 p Ep~
(2)3
2
which is directly measurable via the Casimir effect.
Of course, by this argument, even the energy density is infinite (since there
are an infinite number of modes). In reality, though, wed imagine that there is
a maximum momentum:
~
pmax =
lp
beyond which integration would be pointless. However, this yields an energy
density of lp4 = (1018 GeV )4 .

31

We can get rid of this completely (although in an ad hoc manner) via normal
ordering. That is:
:a
p~ a
p~ a
p~ := a
p~ a
p~ a
p~
which is to say, move all of the daggers to the left, and the lowering operators
to the right. Under this new scheme:
Z
d3 p
:=
:H
Ep~ a
p~ a
p~
(66)
(2)3
which doesnt blow up. Of course, it sets the vacuum energy to zero as well,
but thats the price we have to pay.

3.5

Operators and Observables

Number Operator
The Hamiltonian is just one operator. There are others, of course. For
instance, we may want to find out how many particles (excitations, really)
there are in the field. This operation is simple:
Z
d3 p

a
a
p~
(67)
N=
(2)3 p~

where the operator bit inside of the integral should look familiar. It was the operator that told us the state of the excitation of our simple Harmonic oscillator.
As an exercise, you can verify that this operator commutes with the Hamiltonian. Thus, particle number is conserved. This will not generally be true
in quantum fields. Our real-valued scalar field is an exception, not the rule.
Even complex scalar fields will not conserve particle number.
Momentum Operator
The momentum operator is based on the momentum density from the classical field. Recall the momentum density, T i0 (the element of the stress-energy
tensor)
T i0

L
i
(0 )
= i
= ii

So, generalizing to the quantum field, we get:


Z
P i = d3 x
i
" Z
#
r
Z


3
d
p
E
p
~

= d3 x i
a
p~ (t)ei(~p~x) a
(t)p~ ei(~p~x)
(2)3
2
s
"Z
#


1
d3 q
i(~
i
i(~
q ~
x)
q~
x)
(iq ) a
q~ (t)e
a
(t)q~ e
(2)3 2Ep~
Z
d3 p i
=
pa
p~ a
p~
(68)
(2)3
32

where the last step exploits the f unction relation:


Z
d3 xei(~q~p)~x = (2)3 (~
p ~q)
Of course, this is as youd expect. The total momentum of the system is
simply the total momentum of the particles.
But lets work out an example. Suppose we start with the vacuum and create
a particle of momentum, p~:
|i = a
p~ |0i
The momentum operator can be written:
Z
d3 q i
P i |i =
qa
q~a
q~ a
p~ |0i
(2)3
We can then use the commutation relation:
a
q~ a
p~ = a
p~ a
q~ + (2)3 (~
p ~q)
and thus:
P i |i

=
=

Z
d3 q i
d3 q i
*+0


a

|0i
q
a

qa
q~ (2)3 (~
p ~q)|0i
q
~
q
~ p
~
3
(2)
(2)3
d3 qq i a
q~ (~
p ~q)|0i

= pi a
p~ |0i

= pi |i

Just as you knew it must.


Angular Momentum
Finally, consider the following state:
a
p~=0 |0i
the creation of a particle with zero momentum.
Classically, the angular momentum density tensor should be:
J = x T 0 x T 0
Clearly, we only care about the space-like components (and by construction,
the tensor is anti-symmetric). Weve already seen that when we quantize the
system:
Z
d3 p i
T 0i
pa
p~ a
p~
(2)3

Doing no work at all, this is zero, and thus the intrinsic angular momentum of
our field is zero. You may know this is spin. Scalar Fields have no spin.
33

3.6

Normalizing the field

There is something non-Lorentz-invariant about what weve been doing. Essentially, all of our integrals have been over space. Consider the operation:
|~
pi = a
p~ |0i
which generates a single meson of momentum, p~.
This object is clearly not invariant. Apply a Lorentz boost, and all of a
sudden, weve got an entirely different state on our hands. Likewise, the integral
element:
Z
d3 p
cant be Lorentz invariant for the same reason. However, a couple of things are
clear. First, all states of the system must satisfy:
Ep~2 = ~p2 + m2
and thus, wherever integrals over momentum appear, the relation:
Z
d4 p(p20 p~2 m2 )
must appear. Ive used p0 rather than Ep~ as a variable. Obviously, the argument
of the -function will be zero where p0 = Ep~ .
I remind you of a standard relation for dirac delta functions.
(f (x)) =

(x x0 )
f (x)|x=x0

where in this case:


f (p0 ) = p20 ~p2 m2
and hence:
(p20 p~2 m2 ) =

(p0 Ep~ )
2Ep~

and thus:
Z

d4 p(p20 p~2 m2 ) =
=

d3 pdp0
d3 p

(p0 Ep~ )
2Ep~

1
2Ep~

The LHS is Lorentz invariant, and thus, so is the right.


Since our eigenstates, |~
pi form a complete basis set, we know that:
Z
d3 p
|~
pih~
p| = 1
(2)3
34

(69)

This can be re-written


Z

p
d3 p p
2Ep~ |~pi 2Ep~ h~
p| = 1
3
(2) 2Ep~

which, of course, remains true. However, now the integral element is Lorentz
invariant, and so is the wave-function.
Thus, we define:
p
|pi = 2Ep~ a
p~ |0i
(70)

as the relativistically normalized wave-function.


This explains where we got the factor in the original quantum operator for

3.7

The Propagator

Expectation values provide a useful connection between quantum and classical


mechanics. In particular, for some state of the system:

hOi = hn|O|ni
But lets be more precise. Suppose I had a particular state (say the ground
state), and I wanted to measure the expected value of the field:
Z

1  i(~p~xt)
d3 p
i(~
p~
xEp
~ t)

p
a

e
+
a

e
(x)
=
p
~
p
~
(2)3 2Ep~

It takes very little algebra to show:

=0
hi = h0||0i
Not surprisingly, the expectation of the field is zero.
What happens when we ask the question about the expectation of the square
of the field? We dont expect this to vanish, since we know that in the vacuum,
modes percolate constantly.
We can even make this general, by asking, what happens if we make a
perturbation at some space-time coordinate, y, and then try to observe it at,
x? Are the fields correlated?
(y)|0i

h(x)(y)i = h0|(x)
Put another way, suppose we set up a source at some point in space and time,
what are the odds that well observe it at another? Presumably, this should
only be possible if the two events are time-like separated, right?

35

3.7.1

The Feynman Propagator

Well...
(y)|0i

h0|(x)
=

Z Z


1
d3 pd3 p
q
h0|ap~ ap~ |0ieip x +ip x
(2)6 2 Ep~ E ~

d p 1 ip (x y )
e
(2)3 2Ep~
D(x y)

(71)
(72)

Of course, in the first line, we could have included all 4 combinations of a


and
a
, but the only non canceling ones are the ones written.
This thing, D(x y), is known as the propagator, and those of you whove
taken E&M may find it oddly reminiscent of the Greens function.
Were going to introduce something that may seem like something of a cheat:
(y)|0i

F (x y) = h0|T(x)

(73)

This is almost identical to our original propagator but with one subtle difference. Because weve introduced the time ordering operator, the Feynman
Propagator (as its known) has the property that:

D(x y) x0 > y 0
F (x y) =
D(y x) y 0 > x0
How do we express this propagator in a useful way? First, lets make it
Lorentz invariant. Ill claim (as so many have before me) that the Feynman
propagator can be expressed as:
Z

d4 p
1
F (x y) = i
eip (x y )
(74)
4

2
(2) p p m
If youve mastered your Lorentz algebra, youll recognize that the stuff in the
denominator should blow up. After all, its really p20 ~
p2 m2 , which is supposed
to be zero. In fact, we can rewrite the denominator in the following way:
p p m2 = p20 Ep~2 = (p0 Ep~ )(p0 + Ep~ )
This sort of expression is our first hint of antiparticles. After all, weve got just
as much problem with the negative sign on the energy as the positive sign on
the energy.
For convenience, for the rest of the analysis, I will assume that x0 > y 0 .
To show that our propagator works, we need to integrate over p0 :
Z
Z
0
0
i
d3 p i~p(~x~y) dp0
e
eip0 (x y )
3
(2)
~ )(p0 + Ep
~)
(2) (p0 Ep

36

3.7.2

Evaluation with Complex Analysis

The integral over p0 looks impossible. We have two roots, at Ep~ and Ep~ . But
complex analysis gives us a trick. First, introduce a slight change:
Z
0
0
i
dp0
eip0 (x y )
(2)
(p

E
+
i)(p
+
E

i)
0
0
p
~
p
~

And do a semi-infinite semi-circle integral from p0 = to , and then around


the complex part of the plane.
In complex analysis the Cauchy Residue Theorem to be exact we learn
that if you do a counter-clockwise integral around a root:
Z
dz
f (z) = 2if (z0 )
z z0
Since our integral is counter-clockwise, we get a negative sign, and thus:
Z
0
0
dp0
i
eip0 (x y ) =
(2)
(p

E
+
i)(p
+
E

i)
0
0
p
~
p
~

(75)

From the contour, the point, p0 = Ep~ i is inside the contour. So the entire
1-d integral becomes:
0
0
1
ei(Ep~ i)(x y )
2Ep~ 2i
and thus, in the limit of 0, we get equation (71):
Z
d3 p 1 ip (x y )
e
F (x y) =
(2)3 2Ep~

(76)

In practice, though, we can redefine our terms somewhat (but get exactly
the same result by defining):
Z

d4 p
i
F (x y) =
eip (x y )
(77)
(2)4 p p m2 + i
We are going to see that the Feynman propagator (77) is going to be very useful
to us in computing real calculations in the future.
3.7.3

Classical Field Relation to Greens Function

Finally, I note that while I derived the propagator in terms of QFT, it has application in classical (relativistic) field theories as well. Consider the operation:
Z

d4 p
i
2
( + m )F (x y) =
(p p + m2 )
eip (x y )
4

2
(2) p p m + i
Z
d4 p ip (x y )
e
= i
(2)4
= = i(x y)
37

To take a particularly simple example, imagine two interacting electrons,


that are moving around at non-relativistic speeds or sitting still entirely. Suppose that the force between them is mediated by a massive, spinless, bosonic
field. In that case, the energy of interaction will be governed by, F , such that
U F
How does it scale with distance? We start with the Greens function relation:

 + m2 F = i(x)

where Im putting the source particle at the origin for convenience. In the
time static case, this expression simplifies to:

2 + m2 F = i(~x)

where by symmetry

F = F (r)
Recall a couple of useful relations:
 
1
2

= 4(~x)
r
and also

1 d2 (rf )
r dr2
Thus, multiplying both sides by r, we get:
2 f (r) =

d2 (rF )
m2 (rF ) = 0
dr2
which solves to:

emr
r
where when you include the time propagation effects, only the minus sogn makes
sense. This ultimately yields:
emr
(78)
U
r
which for a massless mediator produces a standard 1/r potential, and drops off
more quickly otherwise.
This is another way of saying that if you have a source that is mediated
by a massive particle, the classical field will propagate in a way described by
the Klein-Gordan equation. Or, in other words, we have a source term in our
Lagrangian:
1
1
L = m2 2
2
2
then we can determine the field generated by a particular source:
Z
(x) = d4 yF (x y)(y)
F =

You create a current (or source) in some point in spacetime, and you observe
the field oscillating elsewhere.
38

3.8

The Complex Scalar Field

Id like to finish up this section by pointing out how our approaches to quantizing fields an be generalized. For the most part, were going to have free-field
Lagrangians which are Lorentz invariant, and which are also quadratic in the
source fields. This means that the E-L equations are going to be linear in the
source fields.
Lets consider the complex scalar field:
L = m2

(79)

We saw this before in 2.6.2.


We also saw that and can be treated as two separate fields for the
purpose of the Euler-Lagrange equations, and that they produced two KleinGordan equations:
+ m2 = 0
and similarly with the asterisks. Thus, the solutions (for the classical field) are:
Z

d3 p
1
p
(x) =
bp~ ei~p~x + cp~ ei~p~x
3
(2)
2Ep~
Z
3

1
d p
p
b ei~p~x + cp~ ei~p~x
(x) =
(2)3 2Ep~ p~

Where the second solution is clearly the complex conjugate of the first. Everything in this otherwise looks identical to our real field.
Our conversion to operators is trivial:
Z

d3 p
1  i~p~x
i~
p~
x

p
=
(80)
b
e
+
c

e
p
~
p
~
(2)3 2Ep~
Z

1  i~p~x
d3 p
p
(81)
=
bp~ e
+ cp~ ei~p~x
3
(2)
2Ep~

Whats this? Well, the c operators clearly create a particle, and b clearly
creates the anti-particles (and without daggers, they are annihilators. Combinations of terms always yield combinations of bb , or b c , and so on.
This makes sense. In one case, we get a scattering term. In another, we produce particle-antiparticle pair. In fact, all 4 combinations produce reasonable
interpretations.
Since weve already done all of the work with the real-valued scalar field,
I wont repeat it here. However, I should at least write down the relevant
commutation relations:
i
i h
i h
i h
h
(82)
cp , bq = cp , bq = cp , bq = cp , bq = 0
and

i
 h
cp , cq = bp , bq = (2)3 (~
p ~q)
39

(83)

just like we had with the field.


Finally, using the relation of conserved current we saw before, we can define
a total charge:
Z
d3 p
Q=
(c cp~ bp~ bp~ ) = NC Nb
(2)3 p~

3.9

Vector Fields (and beyond)

Were not ready to do electrons yet. Spin 1/2 particles dont have a classical
analog. However, spin-1 particles (aka photons) do. Instead of a single scalar
field, the field is components of a vector, A . Ill spare you the suspense, and
~
let you know that A0 = , the scalar potential of the E&M field, and Ai = A,
the vector potential.
In general, we might imagine being able to construct a Lagrangian with the
following sort of terms: A A , or A A . Remember that these terms are
supposed to be quadratic in A for free-field theories. As it turns out, to make
our result consistent with Maxwells equations, the Lagrangian density becomes:
1
L = F F
4

(84)

F A A

(85)

where
This matrix (the Faraday matrix) has zeroes in the diagonal, and is antisymmetric. As such, there are only 6 independent terms. Those 6 are the 3
components of the E-field, and 3 of the B-field. Well talk about that later.
For the free classical field, we can solve the Euler-Lagrange equations as:
F = 0
Using the definition of F , this can be satisfied if:
X Z d3 p

1
~
p~
x

p
~~
x
p
A (x) =
p
+ p
~, e
~ ap
~ ap
~, e
3
(2)
2Ep~
n,

(86)

where represents one of two possible polarization states. Conversion to a


quantized field operator should be fairly obvious, but well talk about it more
when we get to QED.
Those p
~ vectors components are important. Lets suppose (for convenience), that ~
p points in one of the 3-principle directions, call it i. can take
on the values 1,2,3. What we get out of the is a unit vector only if i 6= . We
use the Levi-Civita symbols:

i, j, k = cyclic
+1
1 i, j, k = anticyclic
ijk =

0
otherwise
40

In our notation, we care about the term i . Dont worry what happens (for
now) when = 0.
It is clear that if we had a massive photon (massive so that it has a rest
frame), and created a single zero-momentum particle, these operators would
guarantee an integer spin. (See Gross 2.7).
I should point out that, in principle, we could consider free QFTs of any
integer spin. The theory for a spin-2 particle, for example, would be the Lagrangian of a graviton. However, these theories will suffice for now.

41

A Simple Scalar Yukawa Interaction

Gross: Chapters 3, 9.1-9.2


Tong: Chapter 3
We are finally ready to start consider nearly real physical interactions. I say,
nearly real, because it is clear that that systems that will most interest us in
the real universe are those featuring interactions of spin 1/2 particles (fermions)
with spin 1 particles (bosons). Still, we will get a very good impression of
how cross-sections and decay rates are computed from our spin-0 view of the
universe.

4.1
4.1.1

3rd Order Lagrangians


Significance of terms

Every Lagrangian weve seen so far is second order in or . This has resulted in
linear classical equations of motion, which in turn have resulted in our free-field
theory. However, there is nothing to prevent perturbations to this Lagrangian
of the form:
1
1
Lint = 3 3 4 4 + ...
3!
4!
or even cross terms along the lines of:
Lint =

1
3
3!

(87)

where Ive included the last as a numbered equation, because were going to examine it in great detail. It strongly resembles the interaction Lagrangian for an
electron field with a photon field. Dont worry, for now, about what constitutes
a legitimate Lagrangian. Well derive those from symmetry arguments in due
course.
An analysis of the various terms indicates something:
[3 ] = E 1 , which means at high energies, the term is likely to be insignificant. Terms like these are known as relevant.
[4 ] = E 0 are known as marginal, and their impact is not energy dependent.
[5 ] = E 1 , are known as irrelevant, for the simple reason that at low
energies, they do not contribute to the Lagrangian, and only kick in at
higher energies. Irrelevant terms are typically very difficult to deal with,
and we wont in this course.

42

4.1.2

The Perturbed Hamiltonian

Lets take as our example the perturbed Hamiltonian, equation (87). It should
be obvious that since this is a potential energy term, we can simply read off the
interaction Hamiltonian:
int = 1 3
H
(88)
3!
This is one of the very nice things about our Lagrangian picture, the interaction
term drops out trivially. Or, more generally:
Z

Hint = g d3 x
(89)
Note that in this case:

[g] = [E]1

Note also, that I havent actually motivated why we might find ourselves with
an interaction term like this. Dont worry, we will, but itll have to wait a few
weeks.
contains a series of interaction Hamiltonians,
Since the unitary operator, U
its pretty clear that were going to get a bunch of s acting on our system.
Well expand everything out properly in a bit, but you should recall that
each of the three operators in the Hamiltonian is a combination of creation and
annihilation operators, which means that we have the following 8 combinations:
1. a
p~bp~ bp~
2. a
p~bp~ cp~
3. a
p~ cp~ bp~
4. a
p~ cp~ cp~
5. a
p~bp~ bp~
6. a
p~bp~ cp~
7. a
p~ cp~ bp~
8. a
p~ cp~ cp~
Lets give the particles names. The particle created by the a
operator is my

-particle , the particle created by c a particle, and the one created by a b


a particle.
Interaction 1 absorbs a into an and gives it a boost. One could, if one
were so inclined, even draw a picture representing the term:

43

For those of you who missed it, this is a proto-Feynman diagram. For now,
whenever I draw one, a solid line means a particle, and a going back
in time is a . A dashed line is a particle. Different sources use different
conventions, but I prefer my time axis running vertically. It makes it look like
a space-time diagram. Be aware, however, that the x-axis is meaningless.
Note also that this particular diagram wont represent a process that can
actually happen. There is no way to absorb a single particle without a change
in identity and simultaneously conserve both momentum and energy.
The diagram for 3 is particularly interesting:

A decays into a + pair!


And you can figure out by looking at the other 6 combinations what the
relevant processes are in each. Notice that charge (N + N ) is conserved in
every process.
We could, of course, write down a similar set of terms for any possible
interaction Lagrangian, and they will all have similar expressions.
How do we use these observations to do actual calculations?

4.2

Particle Decay

Rather than give you a bunch of rules to simply memorize, were going to start
by exploring decay as a worked example to motivate what comes next. In the
process, were going to work through a fair amount of formalism that will help
us in general.
44

4.2.1

The Interaction Hamiltonian

In general, we want to consider the reaction rate from one state, |ii to another
|f i, where in the case of -decay, for example:
p
|ii = 2Ep~ a
p~ |0i

where were using the appropriately normalized initial state.


Likewise,
p
|f i = 4Ep~ Ep~ bp~ cp~ |0i

where the order of the creation operators dont matter because they commute.
Since we are doing a quantum mechanical calculation, its clear that we will
almost always need to compute the amplitude:
(t, t0 )|ii
hf |U

What is the Unitary operator for this theory? Remember that were working
in the Interaction Representation, which means that operators in the theory
have a time variability.
For our sake, lets only consider the terms in the interaction Hamiltonian.
because the different fields commute with each other in the free-field Hamiltonian, we can look at them all separately.
Rather than give ourselves a headache trying to do all of the integrals over
momentum space, lets do a much simpler example, the QHO. As a reminder:


1

H0 = N +
2
What happens if theres an interaction term in the Hamiltonian (in the Schrodinger
representation) which looks like:
int = a
H

?
Then:

0 (t)
I (t) = U
(t)H
int U
H
0

where:
0
U

= eiH0 t


2 1 2 2 2

=
1 + (i) N t + (i) N t + ... eit/2
2

. Of course, the final expontial terms in each combine


and similarly for U
0
I , we have lots of terms of the form:
multiplicatively to form 1. Thus, to find H
na
m n+m tn+m in (i)m
N
N
n!m!
45

and similarly with a dagger.


It is easy to show that:
and

a
1)
N
=a
(N
a
+ 1)
N
= a
(N

and thus:

na
m = a
m (N
1)n
N
N
N

and similarly for the dagger.


Expanding this out, we get:

a
U
0 U0

1] it[N
] + ...)
= a
(1 + it[N

which is
a
(1 it + ...)
or
a
a
eit

(90)

where it is easy to show the opposite sign:


a
a
eit

(91)

How does this translate to our various interaction terms in our QFT Hamiltonian? Lets just consider 1 term, the one most relevant to our decay:
I = g
H

d x

d3 pd3 p d3 p
1
p
a
p~bp~ cp~ ei(p p p )x
(2)9
8Ep~ Ep~ Ep~

(92)

Okay, it was an awful lot of work just to get the exponent, but that exponent
has the nice property of equalling zero if and only if energy is conserved in our
decay. We will find a similar expression regardless of the type of interaction.
4.2.2

The S-Matrix

Now that weve generated an interaction Hamiltonian we can start trying to


figure out how a particular system will evolve. We need to use the Unitary
evolution operator:

 Z
(t, t0 ) = T exp i
U

I
dtH

t0

which, in the limit of t0 , and t is simply known as the S-matrix


amplitude:

Sf i = hf |S|ii
(93)
The S-matrix will obviously be very important in doing calculations of scattering and decay rates. In general, we are going to work it into a form like:
X 
Sf i = iAf i (2)4
p
(94)
46

Since momentum (and energy, the zeroth component of the momentum 4-vector)
must be conserved on all interactions, there will always be a delta-function.
The normalization is arbitrary, but useful, and the i pre-factor is just standard
notation. It obviously doesnt matter since the square of Sf i will always be
positive.
Clearly, the square of Sf i is going to be related to things like the cross section
and the decay rate, but how?
4.2.3

Fermis Golden Rule

Lets consider two different states, |mi and |ni, with some energy difference,
in an ordinary QM setting. Were still going to use natural units, however.
Were not animals.
To first order:
I = H
int eit
H
so:
(t) = I iH
int
U

dt eit

We can ignore the I (because n 6= m), and thus:


(t)|ni = hm|H
int |ni e
hm|U

it

The probability of a transition is simply the square of this:


(eit 1)(eit 1) = 2 eit eit = 2(1 cos(t))
and thus:



2 

int |ni 1 cos t
Pnm = 2 hm|H
2

This last term on the right has some nice properties. For one thing, its
even. For another:


Z
1 cos t
= t
d
2

It also drops off quickly with . Taking a look of the plot of the function:

47

As t , this term approaches a delta function:




1 cos t
t()
2
This function, unsurprisingly, suggests that energy is supposed to be conserved. Thus:
int |ni|2 T (EN EM )
Pnm = 2|hm|H
or

int |ni|2 (EN EM )


P = 2|hm|H

(95)

which is the exact form that wed hoped for. Energy conservation is guaranteed,
and the pre-factor (for a 1-d system) looks awfully familiar.
4.2.4

The Decay Amplitude

Lets stop with the generalities and start computing the actual amplitude for
our decay. First, lets make our initial and final states explicit:
p
p |0i
|ii = 2Ep a
and

|f i =
or in bra form:
hf | =

p
4Ep Ep bp cp |0i
p
4Ep Ep h0|bp cp

48

Recall equation (92), and thus we get (to first order):


Z
p
Sf i = ig
dt 4Ep Ep h0|bp
cp

d3 x

1
d3 qd3 q d3 q
p
a
q~bq~ cq~ ei(q q q )x
(2)9
8Eq~ Eq~ Eq~

p
2Ep a
p |0i
Z
Z Z Z 3 3 3 s

d qd q d q
Ep Ep Ep
4
h0|
ap a
q bp bq cp cq |0iei(q q q )x
ig d x

(2)
Eq Eq Eq

Each of the contributions:


h0|
ap a
q |0i = (2)3 (~
p ~q)
which quickly yields:
Sf i = ig
or

Sf i = ig
This is dramatically simpler.

d4 xei(p p p )x h0|0i
Z

d4 xei(p p p )x

A note on infinite integrals


The form of our S-matrix element still has a 4-dimensional integral in it over
all of space and time. Handling these expressions can be done in a number of
ways as weve already seen.
Lets start by doing this in 1-d. Ive done the following trick a bunch of
times, but I want to make this explicit. Consider the 1-dimensional integral:
Z L/2

1  ikL/2
eikx dx =
e
eikL/2
ik
L/2
2
sin(kL/2)
=
k
sin(kL/2)
= L
kL/2
= L
where the last step is under the assumption that k 0, and L is finite.
As you know:
Z
eikx dx = (2)(k)

which means that in going from an infinite box to a finite box of length, L, we
can substitute:
(2)(0) L
49

or more generally:
Z

d4 xeip x = (2)4 (4) (p) = V T

(96)

where we imagine integrating over a finite volume and time, and then extending
those limits to infinity.
We will find this particularly useful when we get relations like the square of
a delta function. I assure you, this will ultimately be most useful for keeping
travel of terms.
Returning to the calculation of the amplitude
We are now almost completely done computing our S-matrix. Using the just
derived results, we can convert it to:
Z

Sf i = ig d4 xei(p p p )x
= ig(2)4 (p p p )

Wait! I warned you that the S-matrix would ultimately take the form:
X 
Sf i = iAf i (2)4
p

and it does!
In the case of a first order decay (looking at only at the first order term in
the unitary operator), we get:
Af i = g
(97)
Not the most surprising thing in the world.
This work is not for vain. Having done all of these integrals and gotten such
a simple result is what will allow us to compute the Feynman rules in the
next chapter. We dont have to do these integrals every time.
But how do we turn the amplitude or the S-matrix into an actual decay
rate?
4.2.5

Calculation of the Decay Rate

We have chosen a particularly straightforward problem. 2-body decay is in


many respects much simpler than three or more body decay, or of scattering.
If we assume that the particle is of mass, m and is initially at rest, and that
the particles each have mass, M , then by definition, the outgoing momenta
must be equal and opposite. Energy conservation yields:
p
m = 2 (M 2 + p2F )
where pF is the uniquely determined outgoing momentum:
r 
m 2
pF =
M2
2
50

(98)

Any two body decay will have a unique outgoing momentum (up to the
entirely random direction), while with 3 or more particles, there is clearly going
to be some flexibility.
It should be clear that the probability of of transition between two states
will be proportional to:
P |Sf i |2
since Sf i is simply the amplitude of the oscillation. However, to normalize the
transition, we need to compute:
P =

2
|hf |S|ii|
hf |f ihi|ii

The terms in the denominator are easy. For instance,


p
i = 2Ep a
p |0i

and so:

hi|ii =
=
=

2Ep h0|0i

2Ep (2)3 (3) (0)


2Ep V

With a similar relationship for the outgoing particles. Dont worry. The V s
will ultimately cancel out.
Multiplying it out, we get:
P =

P 2
|Af i |2 [ ( p)] (2)8
8Ep Ep Ep V 3

The square of the delta function produces some difficulties, but remember that
we can replace one of the deltas with
X 
(2)4
p =VT
thus:

P =
or we get a rate of:
=

P
|Af i |2 [ ( p)] (2)4
T
8Ep Ep Ep V 2
P
|Af i |2 [ ( p)] (2)4
8Ep Ep Ep V 2

or, more generally:


= |Af i |2 (2)4

X 
p

51

1
2Ep~I

out

1
2Ep~i V
states

(99)

All we have to do is compute the Af i coefficients (which weve done for this
particular theory) and integrate over all possible outgoing momenta:
Y Z d3 pi 1 
1
=
(100)
|Af i |2 (2)4 (pI pF )
3
(2) 2Epi
2Ep~I
outstates
Youll notice that all of the V s drop out. Youll also notice that this is such a
staggeringly important equation that Ive put a box around it.
Now all that remains is to actually do the integral, where Ep = m, and (in
our case):

m
0

p=
0
0

=
=
=
=

1
d3 p d3 p
(p p p )
(2)2
8mEp Ep
Z 3 3
d pd p
1
2
g
p + p~ )
(m Ep Ep ) (3) (~
(2)2 8mEp Ep
Z
1
d3 p
2
(m 2Ep )
g
2
(2) 8mEp2
Z
p2
g2
dp (m 2Ep )
8mEp2

Z Z

We need to be a bit careful integrating over the delta-function. In reality, since


were integrating over p , we should express it as:
p
(m 2Ep ) = (m 2 M 2 + p2 )

where we know that the value pF is the one that satisfies the dirac-delta function.
Recall:
(x x0 )

(f (x)) =

f (x)|
x=x0

so

(m 2Ep ) =

(p pF )
m(p pF )
=
2pF /EpF
4pF

Thus:

g2

p2
1
dp (p pF )
32pF Ep2

1
p2F
32pF (m/2)2

So, we get to first order in the center of mass frame:


pF
= g2
8m2
52

(101)

Inspection will assure you that this dimensionally has units of energy (or inverse
time, which is appropriate for a rate).
Congratulations! Weve done our first real QFT calculation!
Note also:
The rate of the decay is proportional to g 2 . Higher order terms will include
higher powers of g as well see. The weaker the coupling term (in terms
of g/m) the slower the decay. Also, the more that 1st order calculations
are sufficient.
The larger the mass gap between m and 2M , the larger pF will be, and
thus, the higher the rate of decay. You could generalize this, if you like,
to say that all things being equal, high energy decays will occur faster.
4.2.6

Lessons Learned so Far

I want to reiterate that our decay calculation is not exact. Weve only calculated
the scattering amplitude to first order.
Moreover, the calculations weve done so far only really hold for the scalar
theory weve been working on.
That said, we can already get an idea of how the Feynman calculus is going
to work for us. As a quick reminder, heres how things played out. We started
by imagining an interaction as follows:
p

p
This diagram has exactly 1 vertex, which by construction conserves charge,
and, as well see, momentum and energy. Recall that the whole point of the
calculation was to compute the scattering amplitude, Af i , which we then used in
our decay calculation. This will be the point of all of these diagrams. Ultimately,
each will produce an amplitude and if there is more relevant diagram, well
simply add them together.
So, how do we use this diagram to compute an amplitude WITHOUT using
a bunch of creation and annihilation operators? Using what weve seen so far:
1. Label all external 4-momenta with a label, pi (or with primes, as Ive done
here).

53

2. For each vertex, write down:


4

(ig)(2)

X
i

qi

where Ive generalized the momentum to include internal lines (which


we havent seen yet). Thats why I call it q. In our case, we have 1 vertex,
so:
(ig)(2)4 (p p p )
As you can so, ingoing lines and outgoing lines get an opposite sign. You
need to be consistent here, especially with internal lines that connect two
vertices. A particle that is ingoing to one vertex will be outgoing to
another.
3. The final result should be reducable to:
X 
pi
iAf i (2)4
and from this we can quickly read off:

Af i = g
much faster than our original approach.
4. Once we have the amplitude we compute:
Y  Z d3 pi 
1
V
=
|Af i |2 (2)4 (pI pF )
3
(2)
2Ep~I
out states

out

1
2Ep~i V
states

Of course, decays are not the only possibility. We are also very much interested
in scattering. That will be the topic of our next set of notes, and with it,

54

Scattering and Feynman Rules

Gross: 9.3-9.5
Scattering is a bit more complicated than decays, but the form is nearly
identical. At issue is the fact that the rate of interaction is related to the relative
speed of the interacting particles. Very generally, for a 2-particle scatter we may
say:
d =

X  Y
1
d3 pi 1
1
|Af i |2 (2)4
p
4E1 E2 |~v1 ~v2 |
(2)3 2Ep~i
out states

(102)

If you havent done anything with cross sections before, the idea is that the
rate at which a particular scattering event occurs is something like:
= nv
In this case, the reason that its a d is because its the cross section corresponding to a very specific result two particles flying off at specified momenta
and directions for example. Well integrate and normalize in due course.
While I havent derived the cross-sectional relationship by any means, it
does seem reasonable, especially because it includes a scattering amplitude Af i ,
squared.
Unlike with our decay, we are going to have to introduce higher order diagrams. For instance:

p1
p2

p2

p1

This is the story of two charged particles scattering off one another by exchanging a .

5.1

The Propagator

I grant you that I basically pulled the scattering cross-section relationship out
of thin air. We will get to it in due course, but for now, our biggest issue comes
from the fact that we have an internal line in the diagram. This shouldnt be
a surprise.
55

Lets consider the initial and final states of the system:


|ii cp1 cp2 |0i
and
hf | h0|
cp1 cp2

It should be clear that all 1st order terms in the Hamiltonian produce a zero
amplitude. For instance:
int eit |ii =
hf |H
=

c ca
)
cp1 cp2 |0i
h0|
cp1 cp2 (
0

I picked only 1 term in the Hamiltonian (and didnt bother with the momentum
subscripts), but its obvious by inspection that all of them will produce zero
amplitude.
Instead, were going to need the 2nd order term in the unitary operator:
Z t
Z t
2


(i)
Hdt
Hdt
t

t0

In this case, the creation and annihilation terms are going to look like:
(
cc a
)(
cc a
)
where (roughly) the term in the left parentheses correspond to events at the
left vertex, and the right term corresponds to the events in the right vertex.
Notice also that its completely arbitrary which one gets the creation of the
and which one gets the annihilation.
Unlike with our 1-vertex calculations, were going to have the possibility of
interactions at two different points in spacetime. Writing it out explicitly yields:
q
Sf i = (i)2 g 2 h0|
cp1 cp2 4Ep1 Ep2
!
Z
Z 3 3 3
1
d q1 d q2 d k
i(q1 q2 +k) x
4
p
k cq2 e
cq1 a

d x
(2)9
8Eq1 Ek Eq2
!
Z
Z 3 3 3
1
d q3 d q4 d k
4
i(q3 q4 k ) y
p

d y
k cq4 e
cq3 a
(2)9
8Eq3 Ek Eq4
p
4Ep1 Ep2 cp1 cp2 |0i
Dear god! That looks terrifying! But really, its not as bad as it looks. For
one thing, the limits of the time intergation simplify.
But before I do that, Ive already done a bit of a switcheroo on you. Normally, a 2nd order Taylor series expansion looks like:
ex = 1 + x +

x2
+ ...
2

But Ive neglected the 1/2 out front. Thats because the labeling of which
outgoing particle is p1 and which is p2 is arbitrary.
56

p1
p2

p2

p1

Of course, the labeling of which one gets a p1 and which one gets a p2 is
arbitrary, and thus there is another nearly identical diagram with the two of
them switched.

p1

p1

p2

p2

In other words, were essentially doubling up on our amplitudes. If youd


like a more rigorous argument, check out the discussion of Wick contraction in
Tong or Gross.
Its clear, though, that all of those creation and annihilation operators help
us out, but not as much as youd immediately suppose. Only the a
terms cancel
immediately:
k |0i = (2)3 (~k ~k )
h0|
ak a
This doesnt appear to simplify the expression much, but it does help a bit:
Z
Z
Z 3 3 3 3 s
d q1 d q2 d q3 d q4 Ep1 Ep2 Ep1 Ep2
2 2
4
4
Sf i = (i) g
d x d y
(2)12
Eq1 Eq2 Eq3 Eq4

ei(q1 q2 ) x ei(q3 q4 ) y h0|


cp1 cp2 cq3 cq1 cp1 cp2 cq2 cq4 |0i

Z
1 ik (x y )
1
e

d3 k
(2)3 2Ek

You may recognize the last term as the propagator:


Z
Z

d4 k
1 ik (x y )
i
1
e
=
eik (x y )
F (x y) = d3 k
3
4

2
(2) 2Ek
(2) k k m
57

We can also work outward from our various creation and annihilation operators, yielding:
p1 ~q1 )(p~2 ~q3 )(~
p1 ~q2 )(~
p2 ~q4 )
h0|
cp1 cp2 cq3 cq1 cp1 cp2 cq2 cq4 |0i = (2)12 (~
This simplifies things dramatically:
Z
Z
d4 k i(p1 p1 +k) x i(p2 p2 k) y
i
2
4
4
Sf i = g
d xd y
e
e
(2)4
k k m2 i
where Ive put the extra term in the propagator because I know Im going to
need it later.
In this form, the only place that we have space or time dependance is in the
exponents, and so we quickly get:
Z
1
Sf i = g 2 d4 k(2)4 (p1 p1 + k)(p2 p2 k)

k k m2 i
This delta-function makes things easy. I can integrate over k and noting the
second function, I get:
k = p2 p2
Important note: This relationship does not necessarily guarantee that:
k k 6= m2
Clearly, if the effective mass of the mediating is on the mass shell the
contribution is maximum (since the denominator goes to zero), but the effect is
not guaranteed. This reduces the expression to:
Sf i = ig 2 (2)4 (p1 + p2 p1 p2 )

(p2

p2 ) (p2

1
p2 ) m2 i

(103)

or, using or standard amplitude relation:


Af i =

g2
(p2 p2 )2 m2 i

(104)

Wow! That was hard.


But with practice, its clear that we dont need to do all of these integrals
directly at all. Rather, to compute the scattering amplitude, Af i , we simply
need to follow a bunch of rules. These will be known as Feynman Rules and
the Diagrams used to draw them, the Feynman diagrams.

5.2

The Feynman Rules for the Scalar Yukawa Interaction

We can use this to compute the amplitude Af i . Weve come up with an expression solve for this for the two-vertex system. To make things concrete, were
going to use this diagram:
58

p1
p2

p2

p1

The Feynman Rules will (naturally) produce the same result. Af i is the
integrated product of the following:
1. External momenta are described by four-vectors, pi . Weve already seen
that the delta function in the overall scattering relation demands that:
X
pi = 0
i

where by convention, outgoing momenta get a positive sign, and ingoing


get a negative.
2. Label each internal line with a value ki . Dont do anything with the
integral yet. This is just for the labeling purposes. Arrows pointing into a
vertex contribute a positive 4-momentum. Arrows pointing out contribute
a negative 4-momentum.
3. For each vertex, write down:
4

(ig)(2)

X
i

qi

(105)

where qs include the external lines as well. In our example above, wed
have:
(p1 k p1 )
for the left vertex, for example.
4. For each internal line, write the propagator (and an integral):
Z
d4 ki
i
(2)4 ki2 m2 + i

(106)

This propagator works equally well for our particles, but obviously not
for vector or spin-1/2 particles.

59

The final result will be:

X
pi )
iAf i (2)4 4 (
i

where in our shorthand, outgoing momenta get a negative sign.


This prescription will allow us to solve the scattering amplitudes and decay
rates for just about anything. Just draw all possible diagrams, compute the
amplitudes, and add them together. Of course, this assumes that the result will
converge with only a few diagrams. If g is large, this may not happen simply.

5.3

Example: Scattering

Were now ready to compute our scattering amplitude for whatever interaction
we like. The first step, as weve seen, is to draw all possible diagrams. For each.
5.3.1

Calculating the Amplitude

Weve already seen that there are two diagrams that contribute at 2nd order.
One with p1 coming out from the left and one coming out from the right.
(The space-like component of the diagrams are kind of arbitrary). Lets compute
just one of the amplitudes:
Step 1: Nothing to do, cause its already labeled.
Step 2: In my mind, Ive put a k in the diagram.
Step 3: There are two vertices, yielding:
(ig)2 (2)8 (p1 k p1 )(p2 + k p2 )
Step 4: Taking the result from before, we get:
Z
1
ig 2 d4 k(2)4 2
(p1 k p1 )(p2 + k p2 )
k m2 + i
The first delta-function yields:
k = p1 p1
of course, and thus the second delta function becomes:
X
pi )
(p2 p2 + p1 p1 ) = (
i

exactly as expected. Thus, we get:

or:

X
1
= ig 2 (2)4
hf |S|ii
pi

(p1 p1 )2 m2 + i
i
Af i = g 2

1
(p1 p1 )2 m2 + i

This is exactly what we found doing it the hard way in equation (104).
Wed get a second term by making p1 p2 .
60

(107)

5.3.2

2 Particle Scattering Cross Sections (in general)

So weve got an expression for our amplitude, but what does that tell us about
the actual scattering cross section?
We can compute the differential cross section in equation (102). For a 2-2
particle scattering experiment, we get:
d =

1
1
|Af i |2 (2)4 (pF pI )
16 E1 E2 E1 E2 V 2 |~v1 ~v2 |

This will be made much easier if we work exclusively in the center-of-mass


frame. That is:
p~1 + p~2 = p~1 + ~p2 = 0
Further, remember that:
~v =

p~
E

in our coordinates, and thus:


p
(~
p1 /E1 p~2 /E2 )2
s
2

1
1
= |~
p1 |
+
E1
E2
s
2
E1 + E2
= |~
p1 |
E1 E2
E1 + E2
= |~
p1 |
E1 E2

|~v1 ~v2 | =

Plugging this into our expression simplifies things somewhat:


d =

1
Af i |2 (2)4 (pF pI )
16|~
p|(E1 + E2 )E1 E2 V 2

To get the total cross section, we need to integrate over:


Z Z 3 3
d p1 d p2
=
d
(2)3 (2)3
and cancel whenever possible.
The function is going to help us simplify things, but first, lets break up
the delta function into:
!
!
X
X
p~i
Ei

Im going to keep this general, for now, and assume that there are potentially
two different particles, each with mass mj participating in the scatter. We know
that m1 = m2 , but that is specific to our problem.

61

Thus, the first delta-function becomes:




q
q
2
2
2
2
E1 + E2 m1 + p~1 m2 + p~2
and thus:


p
p
2+p
2
2+~
2
m
m

E
+
E

~
p
1
2
1
1
2
2
(2)
p
p
d =
(~
p1 +~
p2 )|Af i |2
2
2
2
2
2
16|~
p1 |(E1 + E2 )V
m1 + p~1 m2 + p~2
4

Thus, when we integrate to get , we can immediately integrate over d3 p2 and


cancel out the second delta-function. Yielding:


p
p
Z
E1 + E2 m21 + p~2
m22 + ~p2
1
2
1
d3 p1
p
p
|Af i |2
=
2+~
2
16(2)2 |~
p1 |(E1 + E2 )V 2
m21 + p~2
m
p
1
2
2
From now on, Im going to (for simplicitys sake), simply call:
Pi = |~
p1 |

with Pf similarly defined. As a result:


d3 p1 = Pf2 dPf d
where
d = sin dd
Of course, for our simple system, conservation of momentum and energy
(and the fact that were scattering identical particles), means that Pi = Pf (but
never mind that).
Thus, the incremental cross section is:


p
p
2 + P2
2 + P2
Z
2

E
+
E

m
m
1
2
Pf dPf
1
2
F
F
1
d
p
p
=
|Af i |2
2
2
2
2
d
16(2)2 Pi (E1 + E2 )V 2
m1 + PF m2 + PF
Using the trick from before, where

(f (x)) =

(x x0 )
f (x)|x0

It is a pain to do, but:




p
p
Z
E1 + E2 m21 + PF2 m22 + PF2
1
p
p
dPF
=
2
2
2
2
PF (E1 + E2 )
m1 + PF m2 + PF
where Ive re-defined PF as the solution to the delta function.
Thus, we get the pleasingly simple relation:
d
PF
|Af i |2
=
2
d
16(2) (E1 + E2 )2 PI
62

(108)

5.3.3

The cross section

Weve done all of the heavy lifting. The result from the previous section is
general for all 2-2 particle scattering problems in the COM frame. Now, all we
need to do is show how this scales with our actual scattering amplitude. Recall
that we had:


1
1
2
Af i = g
+
(p1 p1 )2 m2
(p1 p2 )2 m2
where Ive gotten rid of the because we dont have a singularity, and as a
reminder, m is the mass of the mediator particle.
Rather than solve this in generality, lets solve it for our particular case, and
with our incoming particle 1 moving along the x-axis. In that case:


(~
p1 ~
p1 )2 = PI2 (1 cos )2 + sin2
=

2PI2 (1 cos )

and thus:
(p1 p1 )2 = 2PI2 (1 cos)
(Notice the sign).
Since p~2 = ~
p1 , we get:
(~
p1 ~
p2 )2
Thus:
Af i = g

=
=




PI2 (1 + cos )2 + sin2
2PI2 (1 + cos )

2(PI2 + m2 )
4PI4 sin2 + m2 (1 + 4PI2 )

(109)

We could do this in a number of limits, but supposing we want to do something kind of like electron scattering, in which case, m 0 (for a photon
remember, this isnt real E&M, which is mediated by a vector field. Its just an
approximation). In that case, we get:
Af i
and thus:
1
d
=
d
64

g2
PI2 sin2

g2
(2)EPI2 sin2

2

(110)

Of course, a 2-body problem looks a lot like a problem in which 1 body scatters
off a stationary target (Rutherford scattering), which case, there is a sin4 (/2)
term. Whoa! This is exactly equivalent!

63

5.4

Particle Interaction Energy

Lets suppose you had two, , particles held at rest at positions ~x, and ~y. What
is the energy of interaction?
First, lets consider the basic setup:
x)(~
y )|0i
|ii = (~
which creates two particles at positions, ~x and ~y . Were not too worried about
the normalization, but its clear that this is just an approximation, because
were implying that we can create a particle with fixed position, and quantum
mechanics forbids that sort of certainty.
Likewise, The final state is nearly identical. Roughly speaking, the initial
(and final) state looks like:
Z Z 3 3


d pd q
1
i(~
p~
x+~
q~
y)
p
|ii =
|0i
c

e
(2)6
4Ep~ Eq~ p~ q~

I dont want you to worry too much about the normalization or the missing
creation and annihilation operators. The upshot is that we have two creation
operators at work here.
In fact, for simplicity, Im going to ignore the integration factor entirely.
Now suppose we want to compute the interaction energy between the two
particles? Naively, youd expect this to be:
?

int |ii
Eint = hi|H
Your intuition would be wrong.
The interaction Hamiltonian density is:
Hint = g
After all, working things out (and ignoring integrals), this relation yields:
h0|
cc (
c + b)(b + c)(
a + a
) c c |0i
where Ive lazily also left out the subscripts, p and the like.
That (
a +
a) is a killer. Since we start with no particles and end with none,
is automatically makes the entire expression 0. We need to be a bit cleverer.
Remember that we have the evolution relation:
(t, t0 )|ii
|f i = U
and that if we have an energy Eigenstate of the system:

 Z

U = exp i HI (t )dt 1 iEI t


Or, to put it another way, if we evaluate:
t0 )|ii
hi|U(t,
64

Then we get two terms: The first is a constant. We can ignore that. The second
is a measure of the energy of interaction.

So what is U?
Z
(x)
(x)
(t, t0 ) = I + (ig)T d3 xdx0 (x)
U
21

+ (ig)

Z Z

(x)
(x)(t)
(y)
(y)
d3 xd3 ydx0 dy 0 (x)

The first term on the right we ignore. The second term, weve already shown
doesnt contribute. So we care about the second term. In particular. Taking
1)|ii
hi|(U
we get the product of two terms, one relating to the fields, and one relating
to the field:
Et

(x)(y)
(y) c c |0i
h0|
cc T(x)
Z Z
0
(y)dx

T(x)
dy 0 y|0i
g 2 h0|

The term on the top looks complicated, but all you need to get from it is that
there are an even number of creation and annihilation operators, meaning that
the term is non-zero, and a constant.
The second term can be treated entirely independently since and commute with one another.
Since we have:
(y)|0i

F (x y) = h0|T(x)

our energy term can simplify to:


Z Z
Et = Cg 2
dx0 dy 0 F (x y)
Z Z Z
1
d4 k
= Cg 2
dx0 dy 0
eik(xy)
(2)4 k 2 m2
Z
Z
Z
0 0
0 0
1
d4 k
i~
k(~
x~
y)
e
= Cg 2 dx0 eik x
dy 0 eik y
4
2
2
(2) k m
Z
Z

0 0
1
d4 k
~
eik(~x~y) 2(k 0 )
= Cg 2 dx0 eik x
4
(2) (k 0 )2 ~k 2 m2
Z
Z
3
1
d k
~
eik(~x~y)
= Cg 2 dx0
(2)3 ~k 2 m2
Z
d3 k
1
~
2
eik(~x~y)
= Cg t
3
2
2
~
(2) k + m
Our energy relation appears much simpler now.
We will do our integral in spherical coordinates:
d3 k = sin ddk 2 dk
65

and define:
We can even (for simplicity) assume that ~x ~y = rk,
u = cos
and hence:
and further note that
E

du = sin d
d = 2. This yields:

Cg 2
=
(2)2

1
k dk 2
k + m2
2

dueirku

1
irk

Z
e
eirk
1
Cg 2
2
k
dk
=
(2)2
k 2 + m2
irk
2 Z
Cg
k
sin(rk)
= 2
dk 2
4 r k + m2
Where a factor of 1/2 shows up because technically, the integral should only be
from 0 to .
This integral is a bit tricky. The easiest way to solve it is to imagine that k
can be complex. In which case, there is a root at k = im:
k 2 + m2 = (k + im)(k im)
Thus, taking a clockwise integral over the top half of the complex plane, encompass the k = im root.
Z
Z
k
k
1 dk
dk
sin(rk) =
eikr
k

im
k
+
im
i
k

im
k
+
im

im
= 2i
(i)emr
im + im
= emr
Plugging in this integral, we get:
Eint =

Cg 2 mr
e
4r

(111)

Energy scales as inverse distance, produces an attractive force, and drops off
quickly if we have a massive mediating particle.

5.5

Example: Annihilation

Lets do one more example, and consider a massive , and a massless .

66

p1
p2
k
p1

p2

Lets just use the Feynman rules that weve seen before:
1. External lines are labeled.
2. Internal lines are labeled.
3. Vertices
g 2 (2)8 (p1 p1 k)(p2 p2 + k)
4. Internal Lines:
Z
ig 2 d4 k(2)4 (p1 p1 k)(p2 p2 + k)

k2

1
M 2 + i

Notice that the internal line is a , not a , so we have to use the appropriate mass.
This, of course, intergates to:
ig 2 (2)4 (p1 + p2 p1 p2 )

1
(p1 p1 )2 M 2 + i

and so, combining this term with p1 p2 , we get:


Af i =

(p1

g2
g2
+

2
2
M + i (p1 p2 ) M 2 + i

p1 )2

This looks almost identical to our result from scattering from before.
Lets suppose, for convenience, that ~p1 = ~
p2 . Further, by energy conservation, it is clear that our photons will fly out with:
p
Ep1 = Ep2 = M 2 + |~
pI |2
The actual cross section is almost identical to the scattering result
above, except for the fact that the output and input dont have the same mass.
Suppose that particle 1 is moving initially along the z-axis with momentum,
pI (and in particular, we might be interested in knowing what happens if this

67

speed is much less than the speed of light). In that case, ingoing 4-momentum
(of the 1st particle) is:
p

M 2 + PI2

p1 =

0
PI

and the outgoing 4-momentum of the first photon in:


p

M 2 + PI2
p
2
2

pM + PI sin cos
p
1 =
2
2

M
p + PI sin sin
M 2 + PI2 cos

where pI is the momentum of the incoming particle(s).


Thus:
q
(p1 p1 )2 M 2 = PI2 (M 2 + PI2 ) + 2PI M 2 + PI2 cos M 2


q
2
2
2
2
= 2 M + PI 2PI cos M + PI

And if p1 goes off at angle then the other particle goes off at , so:


q
2
2
2
2
2
2
(p1 p2 ) M = 2 M + PI + 2PI cos M + PI
With some fairly straightforward math, we can then solve:

2(E02

1
1
1

= 2
2
PI E0 cos ) 2(E0 + PI E0 cos )
M + PI2 sin2

where I defined E0 as the energy of the ingoing (or outgoing) particles.


Thus we get:
g2
Af i = 2
M + PI2 sin2
The amplitude is greatest when 0, so the outgoing photons are preferentially
emitted at the poles (the same direction as the incoming s).
As a reminder, the differential cross section for scattering is:
!
Y
X
1
1
1
2
4
pi
|Af i | (2)
d =
4E1 E2 |~v1 ~v2 |
2E
p
~i
out states
i
where for 2 outgoing particles, we get:
Z Z 3 3
d p1 d p2
1
1
=
|Af i |2 (2)4 (p1 + p2 p1 p2 )

6
(2) 16E1 E2 E1 E2 |~v1 ~v2 |
68

Of course, all of our ingoing and outgoing energies are the same, E0 . Likewise,
because we were smart and work in the center of mass frame, the space-like part
of the delta-function looks like:
(~
p1 + ~p2 )
and the timelike part yields:
(2E0 |~
p1 | |~
p2 |)
Combining all of this yields
Z 3
d p1 1 E0
1
4
= g
(2E0 2PF )
(2)2 16E04 2pI (M 2 + PI2 sin2 )2
Z
1
2p2F dpF sin d 1 E0
= g4
(2E0 2PF )
4
2
2
(2)
16E0 2pI (M + PI2 sin2 )2
Z
1
(E0 PF )
dpF sin d 1
= g 4 p2F
3
2
2
2
2
64
E0 PI (M + PI sin )
2
Z
1
1
sin
d
= g4
128 E0 PI (M 2 + PI2 sin2 )2
The integral:
Z

sin d

1
(M 2 + PI2 sin2 )2

ends up being quite ugly. However, as an approximation, it is:


8PI2
2

+ ...
4
M
3M 6
In other words, for low speed collisions PI << M , this expression becomes:

g4
128 2 M 6 v

(112)

Note that this has units of [E]2 , exactly as wed expect it to.

5.6

Example: Higher order corrections in decay

As a final example (albeit one that were not actually going to compute the
decay rate for), lets consider a single higher-order correction to our decay
problem from earlier. Once again, were assuming both and are massive.

69

p2

p3

p1

This is a bit tougher than before. Its not tough to follow the Feynman rules,
but solving the integral will be a challenge.
1. My external lines are labeled.
2. My internal lines are labeled.
3. Labeling the vertices:
(i)3 g 3 (2)12 (p1 k + k )(k k k )(k p2 p3 )
4. 3 internal lines:
Z Z Z
3
ig
d4 kd4 k d4 k (p1 k + k )(k k k )(k p2 p3 )

i
i
i
k 2 M 2 + i k 2 M 2 + i k 2 m2 + i

The various delta-functions simplify things a bit:


k = p2 + p3
and
k = k k = k p2 p3
so
p1 k + k = p1 p2 p3
and so the calculation reduces to:
Z
1
1
1
3
g
d4 k(p1 p2 p3 ) 2
k M 2 + i (k p2 p3 )2 M 2 + i (p2 + p3 )2 m2

70

We now have to actually do the integral over k. Certain terms can be taken
outside (and even evaluated), but we are left with:
Z
1
1
d4 k 2
k M 2 (k p4 )2 M 2
where Ive simplified:

m
0

p4 = p2 + p3 =
0
0

for ease of writing, and where Ive gotten rid of the terms until and if they are
needed.
Lets consider this integral. We can simplify the term a bit:
d4 k = 4k12 dk1 dk0
such that:
k 2 M 2 = k02 k12 M 2
and
(k p4 )2 M 2 = (k0 2m)2 k12 M 2
where k1 is the magnitude of the momentum and k0 is the energy. Thus:
Z
Z
1
4 dk1 k12
dk0 2
2 M 2 )((k m)2 k 2 M 2 )
(k

k
0
0
0
1
1
The inner interval has two roots:
E1 k0 =
and

q
k12 + M 2

E2 k0 = m +

q
k12 + M 2

So the inner integral could be rewritten:


Z
1
dk0
(k0 E1 )(k0 + E1 )(k0 E2 )(k0 + E2 )
0
which, for positive values clearly has two poles, making this integral tough. Of
course, we can use the same trick as before:
Z
1
1
=
dk0
2
(k0 E1 + i)(k0 + E1 i)(k0 E2 + i)(k0 + E2 i)


1
2i
2i
=
+
2 2E1 (E1 E2 )(E1 + E2 ) 2E2 (E2 E1 )(E1 + E2 )

1
i
2 E1 E2 (E1 + E2 )
71

5.6.1

A First Stab at Renormalization

Believe it or not, that was the easy part. Our integral over k now becomes:
Z
k12
p
p
2 2 i
dk1 p 2
2
2
( k1 + M )(m + k1 + M 2 )(m + 2 k12 + M 2 )
0

Or, if we consider the contribution to the overall integral, we get:


Z
k2
p 1
p
4 2 ig 3 (p1 p2 p3 )
dk1 p 2
2
( k1 + M 2 )(2m + k1 + M 2 )(m + k12 + M 2 )
0
yielding

k12
p
p
dk1 p
2
2
( k1 + M 2 )(m + k1 + M 2 )(m + 2 k12 + M 2 )
0
(113)
which is super-nice, except for that integral.
The problem here is that at large values of k1 , this thing diverges. After all,
its clearly:
Z
dk1
k1
which is bad at infinity, and yields an infinity.
This is a real infinity, and kind of a disaster. It is also our first introduction
to renormalization. Im not going to this in generality. The more general approaches (applied later) leave the solution as Lorentz invariant. However, youll
get a flavor for how it works with the following.
First, imagine multiplying in a fudge factor:
Af i =

1
g3
(2)2

2
2 + k12
Clearly, this invokes a cutoff on a scale, . On the other hand, for suitably
large, it does nothing. This is a regularization term and allows our integral
to be finite.
Im not going to do the exact integral. There are several reasons. First,
this isnt actually how youd do the renormalizations. Secondly, if we actually
wanted to compute the exact 3rd order correction to the transition amplitude,
wed need to use all 3 vertex diagrams. Nevertheless, let me give you a sense of
how everything comes out. Ill write it in dimensional terms.
To make things concrete, well suppose that the particles are massless,
M = 0.
Af i

2
k1
(m + k1 )(m + 2k1 ) 2 + k 2
0
Z m
Z
k1 dk1
dk1
3
g 3

g
2
m
k1
0
 m

= g 3 B g 3 C ln
m
g 3

dk1

72

Theres a perturbation in the effective transition amplitude that is dependent


on an (unknown) maximum scale presumably the Planck scale, but really, it
could be anything.
This isnt as big a deal as you might think. After all, in practice, we dont
ever measure the bare term. We only observe the measured term.

5.7

Example: A Simple Mass Perturbation

Consider, for instance, the mass of a particle. Weve been treating it as part
of the free field solution, but theres no reason that we couldnt imagine it as a
perturbation to the Hamiltonian:
Z
1 2

Hint = m
d3 x(x)2
2
We could then ask about the amplitude of a 1-vertex diagram (just to understand what it means).
, p2

, p~1 = 0

We almost immediately get:

1 2
m
2
In other words, this diagram effectively measures the mass of a particle. But
what happens when we look at a perturbation?
, p2
Af i =

, p1
73

A little bit of math will get a similar result to the previous section:

 

2
Af i,2nd order = g B + C ln
M
or to put it another way, the measured mass and the true mass are related via:

 

m2meas = m2bare + 2g 2 B + C ln
M
where the latter term can be arbitrarily large. But thats okay (within reason).
We only care about the combination of terms.

74

The Dirac Equation

Gross: 5, 7.4, 8.3-8.6


Tong: Chapter 4, 5.1-5.2
Thus far, weve discussed scalar fields only. In reality, we know that particles
have spin. The fermions have spin=1/2, the bosons, spin=1. (The Higgs if/when
it is discovered will have spin=0, and the Graviton, spin=2, and these are also
bosons, but lets ignore them for the time-being).
I introduced the Klein-Gordan equation previously as a solution to a relativistic field, but it actually does double duty. It is also the evolution equation
for a single relativistic, scalar particle. It has a fatal flaw, however. Its 2nd order in time, which means that the state of the system is insufficient to determine
the future evolution.
In this section, well talk about the Dirac equation, the evolution equation
for spin=1/2 particles. Note that this is not field theory. The results of
this study will be important for field theory, however, and toward the end of
these notes, well talk a bit about quantizing the Dirac field.

6.1

1st order vs. Lorentz Invariance

Ideally, we want an equation of the form:

i = H

(114)

which is the time-dependent Schroedinger equation. I know were not doing


field theory, but ~ still equals 1.
Now, consider a particle at rest. Since its a relativistic particle, we have the
relationship:
(p0 )2 m2 = 0
which has the solutions:

p0 m
0

p +m

= 0
= 0

either of which could be used to write an equation linear in energy (and thus
produce a differential wave equation linear in the time derivative).
Now, I know what youre thinking: Only the first solution is physically
viable. Theres no such thing as a negative energy (which is what p0 is, after
all). Diracs particular genius lay in the fact that he was willing to follow this
reasoning through to the end, regardless of the fact that it seems impossible.
But now, consider the momentum of the particle if it gets boosted. Clearly,
under those circumstances, the factoring to solve for energy becomes a bit
more complicated. We know, of course, that:
p p m2 = 0
75

but factoring this must include terms that look like:


p p m2 = ( p + m)( p m) = 0

(115)

Where wed want to figure out the elements of and .


The basic idea is that wed have a free wave-function along the lines of:

eip

where the operator, p and the actual components of p can be related via:
p = p
and where we have our standard operators:
i = p

(116)

and p~ in ordinary nonwhich is exactly the same thing that we have for H
relativistic quantum mechanics (with ~ = 1, of course).
Supposing all of this works out, then the linearized version of the factored
momentum equation can be written as a wave equation:
i m = 0

(117)

This is the Dirac Equation. And now is as good a time as any to give you a
little shorthand for this. The Dirac notation for these s allow us to write:
p/ = p

(118)

and similarly for any other combination of a and a vector, 1-form, or operator.
Its just a sum over 4 terms.
At any rate, the Dirac equation will clearly be satisfied if:
p m = 0
which is our positive solution.
What are the values of and ? It might help if we multiply out eq. (115):
p p m2

=
=

( p + m)( p m)

0 0 p0 p0 + 1 1 p1 p1 + ...
+( 0 1 + 1 0 )p0 p1 + ...
+( 0 + 0 )mp0 + ...
m2

It is clear by construction that of the four lines of expanded terms, only the first
and last dont vanish. The third line immediately yields:
=
76

While the 1st and 2nd can readily be combined to yield:


{ , } = 2

(119)

It is clear that cannot be ordinary numbers. The elements of need to


themselves be 4 4 matrices! Im not going to derive this, but it is easy to show
that we have the correct relation by multiplying out term-by-term.
The Dirac Representation version of the matrices are:


1 0
0
=
(120)
0 1
i =

0
i

i
0

where 1 actually means the 2 2 identity matrix, and i is the ith


matrix, which in case youve forgotten are:


0 1
1
=
1 0


0 i
2 =
i 0


1 0
3 =
0 1
The Dirac Representation version of the matrices are:


1 0
0
=
0 1
i

0
i

i
0

where 1 actually means the 2 2 identity matrix, and i is the ith


matrix, which in case youve forgotten are:


0 1
1 =
1 0


0 i
2 =
i 0


1 0
3
=
0 1
The Dirac Representation version of the matrices are:


1 0
0 =
0 1
77

(121)
Pauli

(122)
(123)
(124)

(125)

(126)
Pauli

(127)
(128)
(129)

(130)

0
i

i
0

where 1 actually means the 2 2 identity matrix, and i is the ith


matrix, which in case youve forgotten are:


0 1
1
=
1 0


0 i
2 =
i 0


1 0
3 =
0 1

(131)
Pauli

(132)
(133)
(134)

These matrices and the form is not unique. Tong uses whats known as Weyl
representation, for example, which produces a non-diagonal 0 matrix. The
Weyl representation is usefully compact when dealing with theories that have
a distinct handedness (like the weak force), but wont be terribly useful for us
now. Ill stick with Dirac representation for now.
One more thing. Now that we have our matrices, we could quickly derive
a number of very useful relations with the slash notation, including:
tr(/
a/b)
a
/
a
//b

= 4a b
= 2/
a

= 4a b

(135)
(136)
(137)

These will be very useful later when were actually trying to compute crosssections and whatnot.
In the meanwhile, assure yourself that the slashed and relations work. Ill
wait.
Dimensionally, all of this means that the wave-function in the Dirac equation( 117) must be a vector of 4-components:

1
2

=
3
4
Note: This is NOT a 4-vector. This is a Dirac Spinor, and its going to have
some interesting properties.
I want to return (briefly) to the Dirac equation itself. Youll note that it is
a first order operator in time. Noting that, we can determine the Hamiltonian
(which is, after all, the time evolution operator for a wave-function), by breaking

78

things up:

= 0H
= 0

i 0 0
(i ) m)

= (i 0 0 + i i i m)
= (i i i + m)

i 0 0

0 H

= (i i i + m)

0 = i 0 i i + 0 m
H

(138)

Of course, in principle, we could add an interaction term as well:


I = e 0 A
H
which is the electromagnetic potential. For now, though, well restrict our discussion to free space.

6.2

Solutions to the Dirac Equation

For simplicity, lets suppose our solution to the Dirac equation is independent
of position. In that case, we have:
i 0 m = 0
Write it all out, and you realize that the first two terms in are independent
of the second two. Thus, we may think of as:


A
=
B
where each of A and B have two elements. the spatially invariant Dirac
equation simplifies to:
A
B

= imA
= +imB

Its clear that in the free-field case, both are going to have an energy term in
the exponential, but with a different sign.
We could go through a fair amount of derivation, but Im simply going to
state the free-field solutions to the Dirac equation, and you can verify that a)
work, and b) are independent. The four solutions are:

u(1) (p) =

E + m

79

1
0

pz
Ep
~ +m
x
p +ipy
Ep
~ +m

(139)

u(2) (p) =

v (1) (p) =

v (2) (p) =

E + m

0
1

px ipy
Ep
~ +m
z
Ep~p+m

(140)

px ipy
Ep
~ +m
z
Ep~p+m

(141)

(142)

E + m

0
1

E + m

pz
Ep
~ +m
px +ipy
Ep
~ +m

1
0

Ill justify both the form and the normalization out front in a moment.
These forms are static, of course (Heisenberg representation). To turn them
into plane-waves (Schroedinger representation), wed need to multiply:
u(s) eip x

6.3

What the Dirac Solutions Mean 1: Solves the Dirac


Equation

Having gone through all of the effort of solving the Dirac equation, and finding
4 solutions, your first question might be why we are going to all of the effort.
There are at least three reasons. The first, is that these equations solve the
Dirac equation, which means that that superpositions of them also satisfy the
Dirac equation.
This may not seem like a huge deal, but hold on.

6.4
6.4.1

What the Dirac Solutions Mean 2: Orthogonality and


Currents
The Adjoint Spinor

One of the most important properties of the Dirac spinor is that, much like
the wavefunction in ordinary non-relativistic quantum mechanics, we expect a
quadratic scalar combination of terms to yield something like a conserved and
Lorentz-invariant quantity. In our case, we will define a very important object,
the Adjoint Spinor, defined as:
= 0

80

(143)

which is known as the adjoint spinor. The product:

is Lorentz invariant. And what is it? Well, for u(p), we get:

1
p

2 
0
z
0
1 0 Ep~p+m
u(1) u(1) =
Ep~ + m
z
+ E p+m
p
~
0


2
pz
= (Ep~ + m) 1
(Ep~ + m)2
= 2m

ip x ip x
e e

Without that crazy normalization up front, its clear that the result would be
a function of Ep~ and therefore not Lorentz invariant. But now notice what
happens when we consider v(p)v(p). We now end up getting 2m.
6.4.2

Orthogonality

The Dirac and Adjoint spinors have a number of very important properties.
Most notably:
u(r) u(s)
u

(r) (s)

=v

= 2mrs

(144)

(r) (s)

= 0

(145)

(r) (s)

= 2mrs

(146)

u
v

In other words, they are orthogonal. But not only that, they are complete:
These solutions also have some rather nice relations to one another. For
example, if we consider the outer product:
X
u(s) (p)u(s) (p) = p
(147)
/+m
s

v (s) (p)v (s) (p)

6.4.3

= p
/m

(148)

The Conserved Norm

Clearly, the quantity:

is going to be important for any particular Dirac field. It represents something


like the total amount of energy, but not really the total number of particles. We
will see that there are truly conserved currents as well:
j =
It can be shown, for example, that for both u and v modes, j 0 yields 2m.
81

(149)

More specifically, it can be shown that:


u(s) u(s) = v (s) v (s) = 2p

(150)

(The sign is independent of u vs v). This is a very useful relation, indeed.


I should also add that we will include some additional currents later when
we compute the stress-energy tensor. But first, well need to write down the
Lagrangian.

6.5

What the Dirac Solutions Mean 3: Operators and


Transforms

The third, and arguably most important property of a wavefunction is that it


should interact with operators. In particular, well want to think about eigenvalue relations:
= O
O
and, of course, transformations. The two are not indepdendent of one another.
6.5.1

Operators: Momentum and Energy

For any wave-mechanical system, the most important oeprator is the Hamiltonian, followed quickly by the momentum operator. As weve seen, the lowered
index form of this is:
p = i
(151)
where the zero component, naturally, corresponds to energy. For the u-modes,
the space-time dependent states can be written:
u(x) = ueip x

which quickly yields an eigen-value of p


With the v-modes, the exponent term is reversed, which means that the
eigenvalue is a bit tougher to interpret.
Things become clearer if we look at the equivalent of the expectation value:
v (s) (p)
p v (s) (p) = +2mp
normalized, and positive exactly the same as the us.
6.5.2 Symmetry Operation: Charge Conjugation

One of the most important types of operations that we can perform on states
are symmetry operations. The idea is that we will have very simple eigenvalues
(-1 or 1, in most cases), or that an operation will exactly turn one particle into
another.

In QFT and fundamental physics generally, there are three very important
symmetry operations:

1. C Symmetry (Charge Conjugation), which essentially means that if we
   turn all particles into antiparticles and vice-versa then all of the physics
   of the universe will look identical.

2. P Symmetry (Parity), which means that if we reflect all vectors:

   \vec{v} \rightarrow -\vec{v}

   then physics will remain unchanged.

3. T Symmetry (Time Reversal), which means that if we reverse the arrow
   of time, physics will remain inviolate.
In Gravity, E&M, and the Strong Force, all three of these symmetries are
respected. In other words, if there is an operation, \hat{C}, which represents charge
conjugation, then:

\hat{C}^{-1}\hat{H}_{int}\hat{C} = \hat{H}_{int}

which can be achieved simply enough if \hat{C} commutes with the Hamiltonian.

As I said, this symmetry does hold for three of the forces, but it does not
hold for the weak force. In the weak force, all neutrinos are left-handed, and
all anti-neutrinos are right-handed. Switching particles for anti-particles most
certainly is noticeable.

Presumably (you might think), the combination of:

\hat{C}\hat{P}

is respected by the weak force. It is not. Good thing, too, since otherwise, we
wouldn't have matter-antimatter asymmetry, and thus we wouldn't exist.

Only:

\hat{C}\hat{P}\hat{T}

seems to be symmetric for all of the fundamental forces.

But what is the C operator? Simple inspection yields:

\hat{C} = i\gamma^2        (152)

(acting together with complex conjugation of the spinor). We can quickly show
that this gives the operation:

\hat{C}\, u^{(r)}(p) \rightarrow v^{(r)}(p)

exactly as you might have hoped.
6.5.3 Symmetry Operation: Parity

Parity is even simpler. For the u states, it is clear that only the lower two terms
have spatial contributions. Thus, for u^{(1)}, applying a negative to the lower
two terms yields a spatial reversal:

\hat{P}\,\sqrt{E_{\vec{p}}+m}\begin{pmatrix} 1 \\ 0 \\ \frac{p_z}{E_{\vec{p}}+m} \\ \frac{p_x+ip_y}{E_{\vec{p}}+m} \end{pmatrix}
 = \sqrt{E_{\vec{p}}+m}\begin{pmatrix} 1 \\ 0 \\ -\frac{p_z}{E_{\vec{p}}+m} \\ -\frac{p_x+ip_y}{E_{\vec{p}}+m} \end{pmatrix}

and similarly for u^{(2)}. Thus, it seems reasonable that:

\hat{P} = \gamma^0        (153)

Interesting things happen, especially when we consider a particle at rest. In
that case:

\hat{P}\, u^{(s)}(\vec{p}=0) = u^{(s)}(\vec{p}=0)

The eigenvalue of the parity operator is 1. u-states have an even symmetry.
On the other hand, it's clear that for a v-particle at rest:

\hat{P}\, v^{(s)}(\vec{p}=0) = -v^{(s)}(\vec{p}=0)

Negative parity. Parity is not just a symmetry. It's a conserved quantity.
6.5.4 Operator: Spin

But now, let's consider the operator:

\vec{S} = \frac{1}{2}\begin{pmatrix} \vec{\sigma} & 0 \\ 0 & \vec{\sigma} \end{pmatrix}        (154)

This is the spin operator, and if it looks crazy and unfamiliar, it should. It's
really 3 operators, and you can imagine really wanting to know, for example:

\Sigma_p = \vec{S}\cdot\frac{\vec{p}}{|\vec{p}|}        (155)

the operator yielding the spin along the direction of motion of our particle. I've
made things easy in this case; since the particle is moving in the z-direction, we
get:

\Sigma_p = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}

trivially. Thus, for particles moving in the +z direction, u^{(1)} and v^{(2)} have a
spin of 1/2, and u^{(2)} and v^{(1)} have a spin of -1/2.

This is not quite as strange as it would seem. After all, much of the
convention for the v-particles is reversed. Let's consider v^{(1)}:

v^{(1)}(p) = \sqrt{E_{\vec{p}}+m}\begin{pmatrix} 0 \\ \frac{p_z}{E_{\vec{p}}+m} \\ 0 \\ 1 \end{pmatrix}

As we've already seen, the eigenvalue of momentum of the v-states is:

\hat{p}_\mu v = -p_\mu v

The spin operator yields -1/2, dotted with the momentum operator which yields
another -1. The spin is along the direction of motion, just as for the u^{(1)} state.
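As a quick sanity check (my own sketch, using the same explicit spinors and basis as the earlier numerical example), the helicity operator for motion along z is just diag(1, -1, 1, -1)/2, and its eigenvalues on the four spinors come out as claimed; which v spinor gets called v^(1) vs. v^(2) is a labeling convention, and here I have matched the notes'.

```python
import numpy as np

m, pz = 1.0, 0.75
E = np.sqrt(m**2 + pz**2)
N, a = np.sqrt(E + m), pz / (E + m)

# Spinors for momentum along +z (same sign/label conventions as before)
u1 = N * np.array([1, 0,  a, 0.0])
u2 = N * np.array([0, 1,  0, -a])
v1 = N * np.array([0, -a, 0, 1.0])
v2 = N * np.array([a,  0, 1, 0.0])

Sigma_p = 0.5 * np.diag([1.0, -1.0, 1.0, -1.0])   # spin along the direction of motion

for name, s in [("u1", u1), ("u2", u2), ("v1", v1), ("v2", v2)]:
    # each spinor is an eigenvector of Sigma_p; print its helicity eigenvalue
    k = np.argmax(abs(s))
    print(name, (Sigma_p @ s)[k] / s[k])
# -> u1: +0.5, u2: -0.5, v1: -0.5, v2: +0.5
```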

6.5.5 Transform Operator: Boosts

We're finally ready to consider transformation operations. In order to get a
running start, let's discuss a simplified case, a 1+1 dimensional system.

A Simple 1+1 System

It is clear that in the Galilean limit, a small boost, \delta\eta, in the x-direction
yields:

\Lambda = \begin{pmatrix} 1 & \delta\eta \\ \delta\eta & 1 \end{pmatrix}

But it is clear that each frame is just a small Galilean boost from one another.
Thus we can re-write this as:

\Lambda = \left(I + \frac{\eta}{N} K\right)^N

which is just a shorthand way of saying that we perform the boosts many, many
times. The matrix K is what we call a generating matrix, and is, by inspection:

K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

Of course, we've seen familiar functions to this before:

\lim_{N\to\infty}\left(1 + \frac{x}{N}\right)^N = e^x

Thus:

\Lambda = \exp(\eta K)

We're going to do this a lot. We come up with a generating function that
ultimately reveals a symmetry or an operator.

This is, of course, a series:

\Lambda = I + \eta K + \frac{1}{2}\eta^2 I + \frac{1}{6}\eta^3 K + ...

I recognize this! It is:

\Lambda = \cosh\eta\, I + \sinh\eta\, K

or, written out:

\Lambda = \begin{pmatrix} \cosh\eta & \sinh\eta \\ \sinh\eta & \cosh\eta \end{pmatrix}

This has the nice result that:

\cosh^2\eta - \sinh^2\eta = 1

Not coincidentally, we also have:

\gamma^2 - (\gamma v)^2 = 1

Thus, this is the ordinary Lorentz boost with:

\gamma = \cosh\eta \quad \text{and} \quad v = \tanh\eta
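The exponentiation step is easy to confirm directly. Here is a minimal sketch (mine, not the notes') that builds \Lambda = exp(\eta K) numerically and compares it to the cosh/sinh form and to the usual \gamma, \gamma v parametrization; the rapidity value is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# Generator of a 1+1 dimensional boost, as in the notes
K = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eta = 0.9                      # an arbitrary rapidity
Lam = expm(eta * K)            # Lambda = exp(eta K)

closed = np.array([[np.cosh(eta), np.sinh(eta)],
                   [np.sinh(eta), np.cosh(eta)]])
print(np.allclose(Lam, closed))                                        # True
gamma, v = np.cosh(eta), np.tanh(eta)
print(np.isclose(Lam[0, 0], gamma), np.isclose(Lam[0, 1], gamma * v))  # True True
# The invariant cosh^2 - sinh^2 = 1 is just det(Lambda) = 1:
print(np.isclose(np.linalg.det(Lam), 1.0))                             # True
```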
How Dirac Spinors Behave under Lorentz Transforms

Of course, we aren't just boosting individual components of vectors. Rather,
we need to make sure that various terms in the Lagrangian will be preserved.
We've already seen what the energy density (the Hamiltonian) looks like. Thus, we
realize that we're going to have to do a transform to preserve the quantity:

\bar{\psi}\gamma^\mu\psi

from one frame to another.

Let's suppose that we find an operator for a boost, which we'll call:

S(\Lambda)

We already know how \bar{\psi}\gamma^\mu\psi transforms. It behaves essentially like the
components of a vector. Thus, we have the relation:

S^{-1}\gamma^\mu S = \Lambda^\mu_{\ \nu}\gamma^\nu

or equivalently:

S^{-1}(\Lambda)\,\gamma^\mu\, S(\Lambda) = \Lambda^\mu_{\ \nu}\gamma^\nu

Of course, the values of the \gamma matrices are the same in all frames. Also, note that
so far, all of my work is completely independent of the sort of transformation
that I'm looking to do.

As with the previous simple example, we're going to assume that \Lambda can be
approximated as a small perturbation, and then we can take an exponent.

Note that at this point, I'm going to work the answer specifically for boosts,
rather than rotations, but the approach will be similar. I am assuming a boost
in the i-th direction, \eta_i, and I will write out my results in terms of B_i, an as-yet
unknown 4x4 matrix which will generate the boost:

S \simeq I + \eta_i B_i

so

S^{-1} \simeq I - \eta_i B_i

to 1st order.

Likewise, we already know that

\Lambda \simeq I + \eta_i K_i

Multiplying everything out we get:

S^{-1}\gamma^\mu S = (I - \eta_i B_i)\,\gamma^\mu\,(I + \eta_i B_i) = \gamma^\mu - \eta_i[B_i, \gamma^\mu]

\Lambda^\mu_{\ \nu}\gamma^\nu = \gamma^\mu + \eta_i (K_i)^\mu_{\ \nu}\gamma^\nu

so

[B_i, \gamma^\mu] = -(K_i)^\mu_{\ \nu}\gamma^\nu

We know the boost matrices. In the z-direction, for instance, it's simply:

K_3 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}

so in the z direction we have:

[B_3, \gamma^0] = -\gamma^3

and

[B_3, \gamma^3] = -\gamma^0

with all others zero. Together, these yield:

B_3 = \frac{1}{2}\gamma^0\gamma^3 = \frac{1}{2}\begin{pmatrix} 0 & \sigma_3 \\ \sigma_3 & 0 \end{pmatrix}

or more compactly:

\eta_i B_i = \frac{1}{2}\eta_i\begin{pmatrix} 0 & \sigma_i \\ \sigma_i & 0 \end{pmatrix}        (156)

Thus, a boost may be expressed as:

S = \exp\left(\frac{1}{2}\eta_i\alpha_i\right), \qquad \alpha_i \equiv \begin{pmatrix} 0 & \sigma_i \\ \sigma_i & 0 \end{pmatrix}

where

\alpha_i^2 = I

so the series looks nearly identical to what we had in the 1+1 universe case:

S = \begin{pmatrix} I\cosh(\eta_i/2) & \sigma_i\sinh(\eta_i/2) \\ \sigma_i\sinh(\eta_i/2) & I\cosh(\eta_i/2) \end{pmatrix}        (157)

Simple comparison from boosting the u^{(1)} state from rest yields:

\cosh\left(\frac{\eta_i}{2}\right) = \sqrt{\frac{1+\gamma}{2}}

It's not simple, but it is algebraically solvable.
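Here is a small numerical check of equation (157) (my own sketch, same Dirac basis as before, and with the sign of the generator chosen so that a positive rapidity produces a particle moving in +z): boosting the rest-frame spinor with S(\eta) reproduces the moving-frame spinor u^(1)(p) we wrote down earlier, and the norm \bar{u}u = 2m is untouched by the boost.

```python
import numpy as np

# Pauli sigma_3 and the 4x4 boost kernel alpha_3 = [[0, s3], [s3, 0]]
s3 = np.diag([1.0, -1.0])
alpha3 = np.block([[np.zeros((2, 2)), s3], [s3, np.zeros((2, 2))]])

m, eta = 1.0, 1.3                       # mass and rapidity (arbitrary)
E, pz = m*np.cosh(eta), m*np.sinh(eta)  # boosted energy and momentum

# Spinor boost operator S = cosh(eta/2) I + sinh(eta/2) alpha_3
S = np.cosh(eta/2)*np.eye(4) + np.sinh(eta/2)*alpha3

u_rest = np.sqrt(2*m)*np.array([1.0, 0.0, 0.0, 0.0])   # u^(1) at rest
u_boosted = S @ u_rest

# Direct construction of the moving spinor from the notes
u_direct = np.sqrt(E + m)*np.array([1.0, 0.0, pz/(E + m), 0.0])
print(np.allclose(u_boosted, u_direct))                        # True

# ... and the normalization u-bar u = 2m survives the boost:
g0 = np.diag([1.0, 1.0, -1.0, -1.0])
print(np.isclose(u_boosted @ g0 @ u_boosted, 2*m))             # True
```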



6.5.6 Transform Operator: Rotations

We can do virtually the same thing with rotations.

A Simple 2-d System

As before, let's just consider the ordinary rotation matrix. This is nearly
identical to the case with boosts, except:

R \simeq \begin{pmatrix} 1 & -\delta\theta \\ \delta\theta & 1 \end{pmatrix}

Thus, the rotation generator is:

J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}

and thus, the full form of the rotation operator is:

R = \exp(\theta J)

Note that

J^2 = -I

so, this forms a series:

R = I + \theta J - \frac{\theta^2}{2} I - \frac{\theta^3}{6} J + ...

which combines to:

R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

as I'm sure you knew.


How Dirac Spinors Behave under Rotations

To do rotations on spinors, we're basically going to repeat our work on
boosts. To simplify things, I'm only going to solve for rotations around the
z-axis, though the result will generalize easily.

Using an exactly analogous argument to boosts, I claim that the transformation
operator will be:

S \simeq I + \theta_i R_i

Again, using an exactly analogous argument to before, we get:

[R_i, \gamma^\mu] = -(J_i)^\mu_{\ \nu}\gamma^\nu

where, for a rotation around the z-axis:

(J_3)^1_{\ 2} = -1 \ ; \ (J_3)^2_{\ 1} = +1

and all other values are zero. Combining, we get:

[R_3, \gamma^1] = \gamma^2

and

[R_3, \gamma^2] = -\gamma^1

This negative sign is going to be trouble. To help us out, I'm going to introduce
another important gamma matrix, normally called \gamma^5:

\gamma^5 \equiv i\gamma^0\gamma^1\gamma^2\gamma^3 = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}

My claim is that:

R_i = -\frac{i}{2}\gamma^5\alpha_i

where the \alpha matrices were defined above:

\alpha_i = \begin{pmatrix} 0 & \sigma_i \\ \sigma_i & 0 \end{pmatrix}

This does, indeed, satisfy the commutation relationships above (you can
check!).

We now have a rotation generator! Multiplying it out yields:

R_3 = -\frac{i}{2}\gamma^5\alpha_3 = -\frac{i}{2}\begin{pmatrix} \sigma_3 & 0 \\ 0 & \sigma_3 \end{pmatrix}        (158)

What could be simpler!

Thus, the rotation operator yields:

S(\theta_3) = \exp\left(-i\frac{\theta_3}{2}\Sigma_3\right)

where I've been lazy and referred to \Sigma as the 4x4 block-diagonal matrices of
Pauli spin matrices. Multiplying this out yields:

S(\theta_3) = I - i\frac{\theta_3}{2}\Sigma_3 - \frac{1}{2}\left(\frac{\theta_3}{2}\right)^2 I + \frac{i}{6}\left(\frac{\theta_3}{2}\right)^3\Sigma_3 + ...

or, more simply:

S(\theta_i) = \cos(\theta_i/2)\, I - i\sin(\theta_i/2)\,\Sigma_i        (159)

As written, this will work for any direction. Multiplying it out explicitly for z,
we get:

S(\theta_3) = \begin{pmatrix} e^{-i\theta/2} & 0 & 0 & 0 \\ 0 & e^{i\theta/2} & 0 & 0 \\ 0 & 0 & e^{-i\theta/2} & 0 \\ 0 & 0 & 0 & e^{i\theta/2} \end{pmatrix}        (160)

This is a very exciting result! Both sets of terms produce a -1 for rotations
of \theta = 2\pi. In other words, spin a fermion around a full rotation, and your
wave-function reverses. You need to spin it around twice to get back to where
you started.
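The famous factor of -1 at 2\pi is worth seeing explicitly. Here is a tiny sketch of my own that implements equation (159) for rotations about z and evaluates it at 2\pi and 4\pi:

```python
import numpy as np

# Spinor rotation about z: S(theta) = cos(theta/2) I - i sin(theta/2) Sigma_3,
# with Sigma_3 = diag(sigma_3, sigma_3).
Sigma3 = np.diag([1.0, -1.0, 1.0, -1.0]).astype(complex)

def S_rot(theta):
    return np.cos(theta/2)*np.eye(4) - 1j*np.sin(theta/2)*Sigma3

print(np.allclose(S_rot(2*np.pi), -np.eye(4)))   # True: a 2*pi rotation flips the sign
print(np.allclose(S_rot(4*np.pi),  np.eye(4)))   # True: 4*pi brings you back

# The explicit matrix is diag(e^{-i theta/2}, e^{+i theta/2}, e^{-i theta/2}, e^{+i theta/2}):
theta = 0.7
print(np.allclose(np.diag(S_rot(theta)),
                  np.exp(-1j*theta/2*np.array([1, -1, 1, -1]))))   # True
```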

6.6 The Dirac Lagrangian

Now that we've gone through some of the basic manipulations on Dirac particles
(and more later, as it becomes necessary), we need to start turning this into the
notation that we're accustomed to.

I'll simply give you the free-field Lagrangian for a Dirac particle:

\mathcal{L} = i\bar{\psi}\gamma^\mu\partial_\mu\psi - m\bar{\psi}\psi        (161)

which you can test (using \psi and \bar{\psi} as separate fields) becomes the Dirac equation.

First, a note on dimensionality. Since:

[m] = [E]

we immediately get:

[\psi] = [E]^{3/2}        (162)

which may not have been what you were expecting.

As for the Lagrangian itself, let's just show that it works:

\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)} = i\bar{\psi}\gamma^\mu

so

\frac{d}{dx^\mu}\left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\psi)}\right) = i\,\partial_\mu\bar{\psi}\,\gamma^\mu

and

\frac{\partial\mathcal{L}}{\partial\psi} = -m\bar{\psi}

which combine to yield:

i\,\partial_\mu\bar{\psi}\,\gamma^\mu + m\bar{\psi} = 0        (163)

A similar exercise quickly yields:

i\gamma^\mu\partial_\mu\psi - m\psi = 0

Likewise, now that we know the Lagrangian, we can immediately compute
the stress-energy tensor:

T^{\mu\nu} = i\bar{\psi}\gamma^\mu\partial^\nu\psi        (164)

You can also use this to show that the current we've used above is indeed the
Noether current found from phase invariance.

6.7 Quantizing the Dirac Field

Now that we know the free field solution to the Dirac equation, and further, now
that we've gotten our propagators, we're immediately in a position to write down
the quantized field:

\hat{\psi}(\vec{x}) = \sum_s \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2E_{\vec{p}}}}
  \left[ \hat{b}^{(s)}_{\vec{p}}\, u^{(s)}(p)\, e^{i\vec{p}\cdot\vec{x}}
       + \hat{c}^{(s)\dagger}_{\vec{p}}\, v^{(s)}(p)\, e^{-i\vec{p}\cdot\vec{x}} \right]

\hat{\psi}^\dagger(\vec{x}) = \sum_s \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2E_{\vec{p}}}}
  \left[ \hat{b}^{(s)\dagger}_{\vec{p}}\, u^{(s)\dagger}(p)\, e^{-i\vec{p}\cdot\vec{x}}
       + \hat{c}^{(s)}_{\vec{p}}\, v^{(s)\dagger}(p)\, e^{i\vec{p}\cdot\vec{x}} \right]

where s runs over the two possible spin states, and \hat{b}^\dagger creates u particles and
\hat{c}^\dagger creates v particles.
6.7.1 The Hamiltonian: Part 1

Let's continue with this line of reasoning (without yet using any of our
commutation relations). Using our familiar stress-energy tensor, we can write down a
Hamiltonian:

\hat{H} = \int d^3x\, \bar{\psi}\left(-i\gamma^i\partial_i + m\right)\psi        (165)

Through a number of beautiful relations (which are done in 5.1 of Tong), the
Hamiltonian can simplify to:

\hat{H} = \int \frac{d^3p}{(2\pi)^3}\, E_{\vec{p}}\left[ \hat{b}^{(s)\dagger}_{\vec{p}}\hat{b}^{(s)}_{\vec{p}} - \hat{c}^{(s)}_{\vec{p}}\hat{c}^{(s)\dagger}_{\vec{p}} \right]
      = \int \frac{d^3p}{(2\pi)^3}\, E_{\vec{p}}\left[ \hat{b}^{(s)\dagger}_{\vec{p}}\hat{b}^{(s)}_{\vec{p}} - \hat{c}^{(s)\dagger}_{\vec{p}}\hat{c}^{(s)}_{\vec{p}} - (2\pi)^3\delta^3(0) \right]

where the second line follows if we (naively) impose commutation relations on the c's.
The last term is the same sort of infinity we've encountered before. We will (as always)
simply choose to ignore it. However, we've got bigger fish to fry. The big
problem here is that we presumably can make the energy small by introducing lots
of anti-particles. In fact, we can imagine this as an infinite reservoir of energy.
This is a real problem.
6.7.2 Anti-Commutator Relations

To rescue this, we need to introduce anti-commutator relations. In particular,
we need:

\{\hat{\psi}(\vec{x}), \hat{\psi}^\dagger(\vec{y})\} = \delta^3(\vec{x}-\vec{y})        (166)

with all other anti-commutators equal to zero. These lead to the relations:

\{\hat{b}^{(r)}_{\vec{p}}, \hat{b}^{(s)\dagger}_{\vec{q}}\} = (2\pi)^3\delta^{rs}\delta^3(\vec{p}-\vec{q})        (167)

\{\hat{c}^{(r)}_{\vec{p}}, \hat{c}^{(s)\dagger}_{\vec{q}}\} = (2\pi)^3\delta^{rs}\delta^3(\vec{p}-\vec{q})        (168)

and all of the others vanish.

6.7.3 The Hamiltonian: Part 2

Using the anti-commutator relation, we now can recast the Hamiltonian above.
Noting:

\hat{c}^\dagger_{\vec{p}}\hat{c}_{\vec{p}} + \hat{c}_{\vec{p}}\hat{c}^\dagger_{\vec{p}} = (2\pi)^3\delta^3(0)

we now get a much more satisfying Hamiltonian (with only one embarrassing
infinity, which has the opposite sign of the one we've seen before!):

\hat{H} = \int \frac{d^3p}{(2\pi)^3}\, E_{\vec{p}}\left[ \hat{b}^{(s)\dagger}_{\vec{p}}\hat{b}^{(s)}_{\vec{p}} + \hat{c}^{(s)\dagger}_{\vec{p}}\hat{c}^{(s)}_{\vec{p}} - (2\pi)^3\delta^3(0) \right]        (169)

6.8 Fermi-Dirac Statistics

I've been a little sloppy when listing which terms vanish and which don't, so
a quick summary of a few relevant commutator and anti-commutator relations
are in order. For our earlier theory (the real-valued scalar field), we had the following:

[\hat{a}_{\vec{p}}, \hat{a}^\dagger_{\vec{q}}] = (2\pi)^3\delta^3(\vec{p}-\vec{q})

but more relevant to this discussion, we had:

[\hat{a}^\dagger_{\vec{p}}, \hat{a}^\dagger_{\vec{q}}] = 0

which means that:

\hat{a}^\dagger_{\vec{p}}\hat{a}^\dagger_{\vec{q}} = \hat{a}^\dagger_{\vec{q}}\hat{a}^\dagger_{\vec{p}}

and thus, that:

|\vec{p}, \vec{q}\rangle = |\vec{q}, \vec{p}\rangle

In our fermionic fields, however, we have an anti-commutator relation (arising
from the need for anti-particles to contribute a positive energy), and
most of them vanish, including:

\{\hat{b}^\dagger_{\vec{p}}, \hat{b}^\dagger_{\vec{q}}\} = 0

(ignoring the spin), and we get:

\hat{b}^\dagger_{\vec{p}}\hat{b}^\dagger_{\vec{q}} = -\hat{b}^\dagger_{\vec{q}}\hat{b}^\dagger_{\vec{p}}

and thus:

|\vec{p}, \vec{q}\rangle = -|\vec{q}, \vec{p}\rangle

This is the origin of the Fermi-Dirac statistical relations you've seen in your QM
class.

6.9 The Fermi Propagator

Finally, it is clear that we're going to have to define a propagator, something of
the form:

\langle 0|\hat{\psi}(x)\bar{\psi}(y)|0\rangle

or similar.

You may recall that the Feynman propagator could be expressed in terms of:

\Delta_F(x-y) = [\hat{\phi}(x), \hat{\phi}(y)]

and we're going to use our intuition to suppose that the anti-commutators are
going to be the way to go in this case. We define the fermionic propagator as:

iS = \{\hat{\psi}(x), \bar{\psi}(y)\}

Note that this is an outer product, not an inner one. The propagator is a 4x4
matrix, not a number.

Multiplying this out (as Tong does), we get:

iS(x-y) = (i\slashed{\partial}_x + m)\left(D(x-y) - D(y-x)\right)

where D is the one-sided propagator we've seen before. Building on what we've
seen before (in k-space), we get:

S_F(x-y) = \langle 0|T\hat{\psi}(x)\bar{\psi}(y)|0\rangle
 = \int \frac{d^4p}{(2\pi)^4}\, e^{-ip\cdot(x-y)}\, \frac{i(\slashed{p}+m)}{p^2 - m^2}        (170)

7 Quantum Electrodynamics

Gross 2.2-2.6, 10
Tong: 6

7.1 Gauge Transformations and Symmetries

Now that we're working in the realm of the real universe, we need to start
thinking about how to generate real Lagrangians. Real, as in, shows up in the
actual universe. In order to understand which Lagrangians are realizable, we
need to introduce the idea of gauge symmetries.

Simply put, a gauge transformation is something that you can do to a field
which doesn't change anything observable. For example, consider the following
relationships from E&M:

\vec{E} = -\nabla\phi - \frac{\partial\vec{A}}{\partial t}

\vec{B} = \nabla\times\vec{A}

where \phi and \vec{A} are the scalar and vector potentials, respectively.

Remember that all we ever care about in E&M are the E&M fields. The
potentials are just helper functions. Now, what happens if we make the following
transformation:

\phi \rightarrow \phi - \frac{\partial\Lambda}{\partial t}, \qquad
\vec{A} \rightarrow \vec{A} + \nabla\Lambda

where \Lambda is an arbitrary function of space and time.

Since the curl of a gradient is zero, the \vec{B} field is unchanged. Likewise, it
should be clear that this transformation creates two canceling terms in the \vec{E}
field, which subsequently remains unchanged. These are the important fields,
of course, because they are the ones that appear in the Lorentz force law.

This is an example of a gauge transformation, and we can use it to put
various constraints on the potential fields in a particular system.

In field theory, a gauge symmetry is assumed (and then experimentally verified)
for each of the fundamental forces. A Lagrangian is then found which
satisfies the gauge symmetry. And lo and behold! With the exception of the
strength of the coupling term (which has to be added by hand), virtually all of
the theory of the force comes out automatically.
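Since "E and B are unchanged" is just a pair of vector-calculus identities, it can be confirmed once and for all symbolically. Here is a small sketch of my own (not part of the notes), using arbitrary functions for \phi, \vec{A}, and the gauge function \Lambda:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (x, y, z)

# Arbitrary potentials and an arbitrary gauge function Lambda
phi = sp.Function('phi')(t, x, y, z)
A = sp.Matrix([sp.Function('A' + i)(t, x, y, z) for i in 'xyz'])
Lam = sp.Function('Lambda')(t, x, y, z)

def grad(f): return sp.Matrix([sp.diff(f, c) for c in coords])
def curl(V): return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                               sp.diff(V[0], z) - sp.diff(V[2], x),
                               sp.diff(V[1], x) - sp.diff(V[0], y)])

def E_field(phi, A): return -grad(phi) - sp.diff(A, t)
def B_field(A):      return curl(A)

# Gauge-transformed potentials: phi -> phi - dLambda/dt, A -> A + grad(Lambda)
phi2 = phi - sp.diff(Lam, t)
A2 = A + grad(Lam)

print(sp.simplify(E_field(phi2, A2) - E_field(phi, A)))   # zero vector
print(sp.simplify(B_field(A2) - B_field(A)))              # zero vector
```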

7.2 U(1) Gauge Symmetry

As an example (and a way of developing Quantum Electrodynamics), we're going
to work out the implications of a particular gauge invariance: U(1). I'm going
to simplify things, for now, and work out electrodynamic theory for a charged
scalar field. Also, until further notice we're just doing relativistic classical field
theory. I'll let you know when it's time to quantize things again.

So, U(1)... We've already seen the global version of this with the transformation:

\phi \rightarrow e^{-i\alpha}\phi

where \alpha is a real constant angle. Provided we also apply the transformation:

\phi^* \rightarrow e^{+i\alpha}\phi^*

Suppose we know nothing about the Lagrangian except that a transformation
of this form leaves the action of a system unchanged? Well, we've already seen
that global U(1) gauge invariance results in the Noether current:

j^\mu = i\left[\phi\,\partial^\mu\phi^* - \phi^*\,\partial^\mu\phi\right]

Now, and here's the tricky part, what if we assume that there is invariance
(so-called gauge invariance) not only globally, but locally? There is no obvious
reason (except that this is apparently how the universe works) that this should be
the case. The idea isn't that our fields don't change at all, but that by applying
such a transformation, our Lagrangian, and the E-L solutions to the Lagrangian,
still make sense.

Applying our local gauge transformation, we get:

\phi \rightarrow e^{-i\alpha(x)}\phi        (171)

and similarly, with a + sign, for \phi^*.

Now things are not so simple. Let's look at the Lagrangian:

\mathcal{L} = \partial_\mu\phi\,\partial^\mu\phi^* - m^2\phi\phi^*

The term:

m^2\phi\phi^* \rightarrow m^2\phi\phi^*

No problem. However, the derivative terms pick up new pieces proportional to

i\,\partial_\mu\alpha(x)

Oh no! The U(1) gauge transformation introduced a new set of terms into the
Lagrangian!

Now it is not obvious to me (or anyone else, incidentally) why local gauge
invariance should exist, but it seems to be a governing property of our universe.
The U(1) transformation (the simplest one) is the generating symmetry for
E&M, SU(2) is the one for the weak field, and SU(3) for the strong.

The point is that we can force the Lagrangian to be conserved if we replace
the derivative with:

D_\mu = \partial_\mu + i\,\partial_\mu\alpha(x)

Note that this doesn't mean that nothing changes if we locally change phases;
just that doing so is allowed, but we need to adjust our Lagrangian accordingly
so that any possible phase distribution works in our Lagrangian.

Doing so necessarily adds two sets of terms. The first is:

\mathcal{L}_{int} = -ig\,A_\mu(x)\left[\phi\,\partial^\mu\phi^* - \phi^*\,\partial^\mu\phi\right]

where I've liberally expanded out my \alpha, and noted that we get a minus sign
in the gauge term for the \phi^* part of the expression, and thrown an arbitrary
coupling constant in front for good measure.

I can distribute my units out however I like, so I define:

A_\mu(x) = \frac{1}{g}\partial_\mu\alpha(x)        (172)

I may as well ruin the surprise now and say that this is going to be the
electromagnetic 4-vector potential:

A^\mu = \begin{pmatrix} \phi \\ A_x \\ A_y \\ A_z \end{pmatrix}        (173)

This very tidily yields a nice interaction term in our Lagrangian:

\mathcal{L}_{int} = -g A_\mu J^\mu        (174)

It should be noted that the coupling term, g, is usually swallowed up into the
definition of the current and is, essentially, the electric charge, e. For now we're
going to leave it out front.

7.3 The vector potential

And what of this term, A? We've already established that it's the 4-vector
potential from E&M.

However, we have a big problem. We don't yet know what a free E&M
field looks like. Consider our total Lagrangian (so far):

\mathcal{L} = \mathcal{L}_{\phi,free} - gA_\mu j^\mu

Can we just leave things like this?

No.

Our rule for gauge invariance is that we need to be able to apply a gauge
transformation to all of our fields, and find that the Lagrangian is unchanged.
However, we now have two transforms:

\phi \rightarrow e^{-i\alpha(x)}\phi, \qquad
A_\mu \rightarrow A_\mu + \frac{1}{g}\partial_\mu\alpha

where the second came from the fact that at first we didn't have any A field
at all, and it was our gauge transformation that gave rise to it in the first place.

This has some rather profound implications. First, it is clear that the free
A Lagrangian must also appear in our total Lagrangian, and that the entire
thing must be locally gauge invariant. Further, it means that terms like:

M^2 A_\mu A^\mu

can't appear in the final Lagrangian (meaning that the final term from our
original gauge transformation must somehow cancel). Why not? Because if we
apply a gauge transformation (with gauge function \lambda) then:

M^2 A_\mu A^\mu \rightarrow M^2 A_\mu A^\mu + 2M^2 A_\mu\partial^\mu\lambda + M^2\partial_\mu\lambda\,\partial^\mu\lambda

which is manifestly not gauge invariant. So the free-field Lagrangian for our A
field can't have that term. This means that our A field represents a massless
particle. Remember that the \phi^2 term represented the mass term!

Likewise, if we imagine 2nd-order first-derivative terms, they can be broken
down into a symmetric and an anti-symmetric component. The symmetric one,

G_{\mu\nu} = \partial_\mu A_\nu + \partial_\nu A_\mu

can't appear in the free-field Lagrangian either, because it transforms under a
local gauge transformation as:

G_{\mu\nu} \rightarrow G_{\mu\nu} + 2\partial_\mu\partial_\nu\lambda

Again, not gauge invariant.

The only term that is, is the anti-symmetric combination:

F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu        (175)

Thus, our Lagrangian can be expressed as:

\mathcal{L} = \mathcal{L}_{\phi,free} - gJ_\mu A^\mu + \mathrm{Const}\cdot F_{\mu\nu}F^{\mu\nu}        (176)

Since the normalization doesn't matter, in order to meet with normal convention,
we take the free-field Lagrangian for the E&M field to be:

\mathcal{L}_{EM} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}        (177)

7.4 The 4-Potential and the Field

I'm going to drop the bombshell. F_{\mu\nu} is an anti-symmetric 4x4 tensor, which
means that the diagonal components are all zero (by definition). Also, the top
six components are just the negative of the bottom six. In other words, there are
only 6 independent numbers. What are they?

Well, first, I'm going to let you in on a secret. That 4-vector, A^\mu? We're
going to define it as the electromagnetic potential:

A^\mu = \begin{pmatrix} \phi \\ A_x \\ A_y \\ A_z \end{pmatrix}        (178)

and similarly:

A_\mu = (\phi, -A_x, -A_y, -A_z)        (179)

So consider, for example:

F_{01} = \partial_0 A_1 - \partial_1 A_0 = -\frac{\partial A_x}{\partial t} - \frac{\partial\phi}{\partial x} = E_x

It's just the electric field! We have similar relations for F_{02} and F_{03}, but we
won't derive them.

Likewise,

F_{12} = -\partial_x A_y + \partial_y A_x

which is minus the z-component of \nabla\times\vec{A}, that is, -B_z.

Fleshing it all out:

F_{\mu\nu} = \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & -B_z & B_y \\ -E_y & B_z & 0 & -B_x \\ -E_z & -B_y & B_x & 0 \end{pmatrix}        (180)

For what it's worth, if we raise the indices, we get:

F^{\mu\nu} = \begin{pmatrix} 0 & -E_x & -E_y & -E_z \\ E_x & 0 & -B_z & B_y \\ E_y & B_z & 0 & -B_x \\ E_z & -B_y & B_x & 0 \end{pmatrix}        (181)

Knowing nothing else about these fields except how they are generated, I
note:

\partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0        (182)

This is known as a Bianchi Identity, and similar expressions end up being very
powerful in GR.

You can check this if you like, but the first term simply produces:

\partial_\lambda\partial_\mu A_\nu - \partial_\lambda\partial_\nu A_\mu

and all 6 final terms end up canceling algebraically.

This looks like 64 different identities (4 values each for \lambda, \mu, \nu), but in reality,
it's far fewer, since permutations don't matter.

\lambda = \mu = \nu = 0 yields a trivial result. Zeroes all around.

\lambda = 0, \mu = 0, \nu = 1 yields:

\partial_0 E_x - \partial_0 E_x = 0

which it does, of course. In other words, the only interesting identities are those
for which all three indices are different.

\lambda = 0, \mu = 1, \nu = 2 yields:

\partial_y E_x - \partial_x E_y - \partial_t B_z = 0

Re-arranging, this is simply the z component of:

\nabla\times\vec{E} = -\frac{\partial\vec{B}}{\partial t}        (183)

the free-field version of Faraday's law of induction. The other two permutations
with \lambda = 0 yield the other two components.

Likewise, \lambda = 1, \mu = 2, \nu = 3 yields:

\partial_x B_x + \partial_y B_y + \partial_z B_z = 0

or in the more familiar form:

\nabla\cdot\vec{B} = 0        (184)

Gauss's law for magnetism.
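The Bianchi identity is purely a statement about derivatives of derivatives, so it can be checked once and for all symbolically. Here is a short sketch of my own: build F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu from four arbitrary functions A_\mu(t, x, y, z) and confirm that every cyclic combination vanishes identically.

```python
import sympy as sp
from itertools import product

X = sp.symbols('t x y z')                              # x^0 ... x^3
A = [sp.Function('A%d' % mu)(*X) for mu in range(4)]   # arbitrary A_mu(t, x, y, z)

def d(mu, expr):                                       # partial derivative w.r.t. x^mu
    return sp.diff(expr, X[mu])

F = [[d(mu, A[nu]) - d(nu, A[mu]) for nu in range(4)] for mu in range(4)]

# Bianchi identity: d_lam F_{mu nu} + d_mu F_{nu lam} + d_nu F_{lam mu} = 0
ok = all(sp.simplify(d(lam, F[mu][nu]) + d(mu, F[nu][lam]) + d(nu, F[lam][mu])) == 0
         for lam, mu, nu in product(range(4), repeat=3))
print(ok)   # True: all 64 index combinations vanish identically
```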

7.5

The Dynamics of the Free-Field Potential

What do we learn from the free-field Lagrangian? Well, clearly we only have
dynamic terms, and thus we care about:
L
( A )
The easiest thing to note is that for any (asymmetric) combination of &
, There are eight relevant terms in the Lagrangian, 4 from F F , and 4
from F F . Thus, taking the derivative (and not worrying about signs), we
get 8 terms, half of which have a A , and half of which have a A , or 4
combinations of F . Thus:
L
= F
( A )
Or, in the free field:
F = 0
For = 0, this yields:
and for = i, we get:

(185)

~ =0
E
~ +E
~ = 0
B

Of course, not all fields are free. We can imagine what happens when we
add a current, or equivalently, when we look at the interaction term. In that
case, the RHS of the Euler-Lagrange equation:
L
= A J A = gj
A
or
F = gj
99

By construction:
F = 0
so we have
j = 0
as, of course, we must for any good conserved current.
Plugging in everything, we get our final two Maxwell equations:

and

~ =
E

(186)

~ =E
~ + J~
B

(187)

So from U (1) gauge invariance, we could, if we wished, derive all of classical


electromagnetism.

7.6

Lorentz and Coulomb Gauge

We started this discussion by noting that the free-field electromagnetic potential


has a very nice invariance, and we proceeded to show that transformations of
the form:
A A +
can be applied arbitrarily to yield the same physical observables.
7.6.1

Lorentz Gauge

This gives us an enormous amount of freedom. For example, for an arbitrary


A field, I can imagine adding a gauge field such that:
A +  = 0
Basically, this is just a matter of solving the Poisson equation for the gauge
field, .
This means that we can arbitrarily find a solution for A (for a time and
~ &B
~ field) such that:
spatially varying E
A = 0
This is known as Lorentz Gauge.
7.6.2

Coulomb Gauge

We can take this a step further, but at the cost of breaking Lorentz invariance.
By breaking up the Lorentz condition into two parts, we can solve:
~=0
A
which equivalently yields

A 0 = 0
100

and thus we can arbitrarily set:


A0 = 0
for free fields.

7.7

Solution to the Classical Free Electromagnetic Field

You know what comes next. We have our free-field Lagrangian for photons.
Lets solve it! Applying the Euler-Lagrange equations, we immediately get:
F = 0

(188)

or equivalently:
A A = 0
Heres where Coulomb vs. Lorentz gauge comes into play. Were going to go
with Lorentz, as it makes things much simpler, and the 2nd term drops out
entirely. Thus, we get (in Lorentz gauge):
A = 0

(189)

I totally know how to solve this! This is just 4 copies of the massless KleinGordan equation. But those solutions arent entirely independent. After all,
if I solve 3 of them the Lorentz gauge condition clearly allows me to solve the
others.
Im going to assume the form:
Z
i
h
1
d3 p
(r)
(r)
p
(190)
A (x) =
(r)
p) ap~ eipx + ap~ eipx
(~
3
(2)
2Ep~

Where (r) are each of 4 4-vectors representing the polarization. And where:
Ep~ = |~
p|

(191)

because, after all, we have a massless particle.


Finally, note that there is an implicitly sum over 0-3 for r. Yes, I realize
that this is contrary to the normal convention (because both rs are upstairs),
but I dont care.
For simplicity, lets consider a radiation wave propagating in the +k direction. In that case, the Lorentz condition becomes:
A 0 z Az = 0
or equivalently:
X
r

(r)

(r)

0 Ep~ 3 pz = 0

101

It is clear that there are two independent polarization modes which satisfy
the Lorentz condition:

0

(2) =
1
0

and (1) which is defined similarly.


More generally, for any momentum, there will be two polarization modes
defined perpendicular to the direction of motion. More generally, we have:
(r) (p) p = 0

(192)

(r) (s) = rs

(193)

for the transverse modes, and

generally.
Note that even though only the r=1 & 2 modes make any obvious physical
sense. The particular components vary based on the direction that the momentum of the mode is pointing. However, for now, well keep all 4 of the modes in
mind.
Besides describing the field generally, we can also write down some important
quantities, like the energy density. Ill leave it as an exercise to show that, in
terms of what weve already defined, we get:
Z


1
~ E
~ +B
~ B
~
d3 x E
E=
(194)
2
exactly as you learned when you were knee-high to a grasshopper.

7.8
7.8.1

Quantizing the Photon Field


The A operator

Now that weve written the free-field photon expansion, its trivial for us to
quantize the electromagnetic field. Namely, we have:
Z
i
h
1
d3 p
(r) ipx
(r) ipx
(r)
p
(195)

(~
p
)
a

e
+
a

e
A (x) =
p
~
p
~
(2)3 2Ep~

Of course, as we saw in our discussion of fermionic fields, it isnt quite sufficient to just write this down. After all, were going to need a few rules for
commutators and anti-commutators.
Before, we noted that the Lorentz condition, A = 0 meant that only two
physical polarizations were going to be relevant the spacelike ones perpendicular to the direction of motion. But how do we impose this? Essentially, we
must demand that for any good state, |ii, we have:
A+
|ii = 0
102

where the A+
operator is the part of the photon operator with (confusingly)
the annihilation term. This further insures that:
hf | A |ii
for any good state. This means that by construction, for a mode moving in the
k direction, we get:
(3)
(0)
(
ap~ a
p~ )|ii = 0
(and we get a similar relationship for modes in any other direction).
7.8.2

The Hamiltonian

This is good because Ive been holding off on writing down the commutation
relations:
(r) (s)
[
ap~ , a
q~ ] = rs (2)3 (~
p ~q)
(196)
which is fine (if a bit weird), until we write out the Hamiltonian:
Z


d3 p
(i) (i)
(0) (0)

H=
a

E
p
~
p
~
p
~
p
~
p
~
(2)3

(197)

where there is an implicit sum in the space-like direction/ This looks worse than
it is. After all, its clear that for any particular mode (say in the z-direction),
the 0 and 3 terms exactly cancel, so we get a positive definite contribution.
7.8.3

The Photon Propagator

As weve already seen, we need to compute the overall photon propagator. The
general form is exactly what youd expect. Indeed, the whole reason that I
choose to do this analysis in Lorentz gauge is that the propagator has a particularly simple form compared to Coulomb gauge. See the Tong notes if you
dont believe me. In short, we get:
Z
d4 p i ip(xy)
e
(198)
h0|TA (x)A (y)|0i =
(2)4 p2
7.8.4

EM Interaction Term

Weve already seen that the general form of the EM interaction term in the
Lagrangian is:
LInt = gj A
We now have a quantized version of both the electron current and the photon
field. We thus get the interaction term:
A
HInt = e

where I threw in the electron charge for good measure.


103

(199)

7.9

Deriving the Feynman Rules

Im not actually going to define the Feynman rules for QED, but I will motivate
them. Consider the interaction Hamiltonian above. Expanding everything out,
we get:
"Z
#
Z


3
d
p
1

3
(s)
i~
p
~
x
(s)
i~
p
~
x
b u (p)e
Int = e d x
p
H
+ cp~ v (p)e
(2)3 2Ep~ p~

"Z


1  (r)
d3 q
(r)
i~
q ~
x
i~
q ~
x
p
b
u
(q)e
+
c

v
(q)e
q
~
q
~
(2)3 2Eq~
#
"Z


1
d3 p
~
~

p
~k eik~x + a
~ eik~x

(t) a
k
(2)3 2E~k

Looks pretty complicated, right? Well, it is. But suppose we are looking at a
particular vertex in a Feynman diagram. Say, for example, that were looking
at one in which an electron is annihilated (b), and a new electron is created (b )
along with a photon (
a )?
We get a bunch of contributions. The integral over space and the exponents
give us a delta function of the form:
(p + k q)
In addition (and forgetting about factors of 2 and integrations), we get a bunch
of terms which look like:

b u(s) (p) bq~u(r) (q)(t) a


~
p
~
k

or, collecting terms more reasonably, we get:


~ (t)
[u(s) (p) u(r) (q)]bp~bq~a

Well see how these play out in the Feynman rules, but I think youll find the
rules more sensible at this point.

7.10

QED Rules

It sure does.
But now were in a position to actually write down the Feynman rules for
electrodynamics:
1. Labels: Label the incoming momentum and energies p1 ...pn , and correspondingly, the spins s1 ...sn . Similarly, label the internal lines, k1 ..kn ,
and r1 ..rn .
2. External Lines: Were making an integral (to compute the scattering amplitude). For each:
104

Outgoing electron: u(s) (p)

Incoming electron: u(s) (p)

Outgoing positron: v (s) (p)

Incoming positron: v (s) (p)

Outgoing photon:
Incoming photon:

In reality, the index (especially on the photons) will be related to the


corresponding index at the related vertex.
3. Vertex Factors: Each vertex gets a factor of:
X
ig (2)4 (
p)

where the index, relates to the photon coming out of it, and the sum is
done in the usual way. In order to simplify later calculations, you should
put the factor between the incoming and outgoing fermions in the form:
[u u]

4. Propagators: Weve already written these. They are:


Electrons and positrons:

i( k + m)
k 2 m2

Photons:

i
k2

5. Integrate: We get a factor of


Z

d4 k
(2)4

for each internal line.


6. Cancel the Delta Function:
P
Remove the factor of (2)4 ( p), and were left with iAf i .

7. Antisymmetrization: Include a minus sign between diagrams that differ


only in the interchange of two incoming or outgoing electrons or positrons
or of an incoming electron with an outgoing positron (or vice-versa).
Note: These factors look fairly complex, but an easy way to keep all of the
terms contracted with their appropriate partners is to follow each fermionic line
backwards.

105

7.11

Example: Electron-Electron Scattering

Maybe its not so obvious so far. Lets clear things up a bit by doing some
examples. Lets begin with an obvious example: electron scattering.
First, lets draw the requisite Feynman diagrams and do step 1 of our Feynman calculus:
e

e
p1 , s1
p2 , s2
, e k

p 2 , s2

p 1 , s1
e

Yes. I really am too lazy to use proper subscripts. Note that this is one of
two possible diagrams. The other has p3 , s3 twisted with p4 , s4 (and so clearly
contributes a minus sign). Im going to write everything in shorthand. So, u(1),
(s )
really means up~11 .
Step 2& 3:
g 2 (2)8 [u(3) u(1)][u(4) u(2)](p3 + k p1 )(p4 k p2 )
Step 4:
We have 1 propagator and its a photon. Note that there are two indices.
Thats good because the two vertices on either end each have an index. After
including it, we have:
ig 2 (2)8 [u(3) u(1)][u(4) u(2)](p3 + k p1 )(p4 k p2 )

k2

Step 5:
We have one internal line, so we do one integration:
Z
d4 k

2
8
ig (2)
[u(3) u(1)][u(4) u(2)](p3 + k p1 )(p4 k p2 ) 2
(2)4
k

ig 2 (2)4 [u(3) u(1)][u(4) u(2)](p3 + p4 p1 p2 )


(p1 p3 )2
1
ig 2 (2)4 [u(3) u(1)][u(4) u(2)](p3 + p4 p1 p2 )
(p1 p3 )2
Whats this? Well, as you may recall (or if you wish to quickly prove), we can
square any of the matrices such that:
=
106

or, in other words, we can treat the matrices themselves as components of a


vector, and use the standard index raising and lowering operations, exactly as
weve seen, such that:
=
Step 6:
For the diagram weve drawn, we get:
Af i =

g2
[u(3) u(1)][u(4) u(2)]
(p4 p2 )2

Step 7:
Adding in the other diagram, we get:
Af i =

g2
g2

u(1)][u(4)
u(2)]+
[u(3)
[u(4) u(1)][u(3) u(2)]

(p1 p3 )2
(p1 p4 )2

Give me all of the spins and momenta, and Ill give you a number!
Of course, in reality, we may want to simplify this. We may want, for
example, to assume that we know nothing of spin and simply average over all
possible spin states. Well do more of this in a while, but for now, its important
to note that were capable of computing the amplitudes. From here, its mostly
a lot of algebra.

7.12

Example: Electron-Positron Annihilation

As a final example, lets consider electron-positron annihilation. In that case,


the Feynman diagram looks like:

p 3 , s3
p 4 , s4
k, r
p 2 , s2

p 1 , s1
e

e+

This is different from the previous example in two ways. First, the outgoing
particles are both photons (bosons), which means that the second diagram, the
one with the momenta switched, get a +1. Secondly, the mediator particle in
this case is an electron, which means that well have a different propagator. So,
lets follow our Feynman rules.
Step 2& 3:

107

g 2 (2)8 [v(2) (4) (3)u(1)] (p4 p2 k)(p3 p1 + k)


= g 2 (2)8 [v(2)/(4)/(3)u(1)](p4 p2 k)(p3 p1 + k)
Step 4:


k/ + m
ig (2) v(2)/(4)(p4 p2 k)(p3 p1 + k) 2
/(3)u(1)
k m2
2

Step 5:

k/ + m
ig (2)
d k v(2)/(4)(p4 p2 k)(p3 p1 + k) 2
/(3)u(1)
k m2
h


i
1
v(2)/(4) p
= ig 2 (2)4 (p4 + p3 p1 p2 )
/1 p
/3 + m /(3)u(1)
2
2
(p1 p3 ) m
2

Step 6:
We get (for this one diagram):
Af i =



i
h
g2
v(2)/

(4)
p

p
+
m

(3)u(1)
/
/
/
1
3
(p1 p3 )2 m2

Step 7: Switching 3 4 gives us a plus sign.


7.12.1

Simplifying the Annihilation

Lets make it simple and consider an electron and positron at rest. What happens? Assuming that the photons are created along the z-axis, we get:
p~3 = (0, 0, m)
and the opposite for p~4 . This converts the denominator term to:
(p1 p3 )2 m2 = 2m2
and an identical term for p1 , p4 .
What about the rest? Its important to remember that p1 (for example) and
4 (for example) are each just ordinary 4-vectors representing, respectively, the
incoming momentum and energy of the electron, and the polarization of one of
the outgoing photons.
There are many rules for combining, contracting, and re-ordering terms in
Feynmans slash notation. A few of the more useful ones (which you can verify
using dummy vectors) are:
a
//b + /ba
/

108

2a b

Since by our Lorentz condition, only the spacelike terms matter in the polarization states (the transverse modes only), the dot product p1 3 vanishes. Even
more generally, p3 3 = 0, so:


p1 /
p3 + m /3 u1 = /3 (/p1 + /p3 + m)u(1)
/
But wait! The Dirac equation, itself, helps us simply this:
(p1 m)u(1) = 0
So
/3 (/p1 + /p3 + m)u(1) = /3 /p3 u(1)
Thus, we get the marginally simpler looking expression:
Af i =

i
h
g2
v(2)

p
/
/
4 3 /3 u(1)
2m2

Or, combining the other term:


Af i =

i
h
g2
v(2) /4 /3 p
/3 + /3 /4 p
/4 u(1)
2
2m

(200)

Weve set p~3 and ~


p4 , which means that we can evaluate /p3 and /p4 direction.
These yield:
0
3
/p3 = m( )
0
3
p
/4 = m( + )

and so our bracket expression reduces to:


m {/4 , /3 } 0 [/4 , /3 ] 3

Weve already seen that the anti-commutator is simply 24 3 .


What about the commutator term? Well, since were ignoring the timelike
component of polarization, we begin by computing / as:


0

/ = =

0
The commutation term can be thus written as:
[/4 , /3 ] = 2i(3 4 )
where ( is the array of spin matrices).
And thus:
Af i =



g2
v(2) 4 3 0 + i(3 4 ) 3 u(1)
m
109

(201)

This is as far as we can go without specifying the spin states of the electron
positron pair. But lets suppose were interested in the superposition state:
( )

2
which you may recognize as the singlet state.
The scattering amplitude is then also a superposition of the two states. For
example, for (the electron, arbitrarily, is the first arrow), we get:

1

u(1) = 2m
0
0
and

v(2) =

2m

0 1

Now we can compute:


v(2) 0 u(1) = 0

and (in a slightly more complicated term):


v 3 u(1) = 2mk
yielding:
A = 2ig 2(3 e4 )z

(202)

The direction comes into play because we specified that that photons are being
emitted in the z-direction. Naturally, the amplitude is maximized when the
polarization is normal to that direction for both photons.
It is straightforward to show that we get a negative of the same term for the
other term in the singlet, yielding:

Af i = 2 2ig 2 (3 4 )z
(203)
I will point out that since the system is spin zero, it must remain so, and thus
there are two combinations of photon polarization state which must
be written
as a superposition of one another. This yields a net effect of i 2 in our final
calculation. Thus:
Af i = 4g 2
(204)
In case youre wondering what all of this amounts two, Ill tell you. The
coupling constant, g is related to the fine structure constant via:

g = 4
(205)

After some very straightforward analysis, we get:


1  2
d
=
d
v m

as the cross-section for electron-positron annihilation.


110

(206)

7.13

Averaging over Spins

You may have noticed that its kind of a pain to directly specify the spin of
every particle in the system. Oftentimes, they are completely randomized. We
could, of course, figure out the cross section for every possible spin state to every
possible spin state and then simply average accordingly:
h|Af i |2 i = average over initial spins, sum over final spins
However, it would be easier if we could simply average beforehand.
Let's consider our electron scattering amplitude:

A_{fi} = \frac{g^2}{(p_1-p_3)^2}[\bar{u}(3)\gamma^\mu u(1)][\bar{u}(4)\gamma_\mu u(2)]
       + \frac{g^2}{(p_1-p_4)^2}[\bar{u}(4)\gamma^\mu u(1)][\bar{u}(3)\gamma_\mu u(2)]

Squaring, we get 3 distinct terms, but all are of approximately the same form:

|A_{fi}|^2 = \frac{g^4}{(p_1-p_3)^4}[\bar{u}(3)\gamma^\mu u(1)][\bar{u}(4)\gamma_\mu u(2)][\bar{u}(3)\gamma^\nu u(1)]^*[\bar{u}(4)\gamma_\nu u(2)]^* + \mathrm{similar}

The pre-factors aren't so tough (and are, of course, spin independent anyway),
but the spinor terms are complicated. We can imagine combining the 1st & 3rd
terms with each other and the 2nd and 4th, and in each case, we're going to get
a combination which looks like:

G \equiv [\bar{u}(a)\Gamma_1 u(b)][\bar{u}(a)\Gamma_2 u(b)]^*

where the \Gamma's are just generic combinations of different 4x4 matrices (one each
in covariant and contravariant form).

We first note that we can get rid of the complex conjugate with a transpose
of the second term:

G = [\bar{u}(a)\Gamma_1 u(b)][\bar{u}(b)\Gamma_2 u(a)]

Now, we sum over all possible spins of b (because while the ordering above is
important, the parentheses aren't):

\sum_{s_b} u^{(s_b)}(p_b)\,\bar{u}^{(s_b)}(p_b) = (\slashed{p}_b + m_b)

which is an outer product calculation that you've actually done on your homework.
So:

\sum_{s_a}\sum_{s_b} G = \sum_{s_a} \bar{u}(a)\,\Gamma_1(\slashed{p}_b + m_b)\Gamma_2\, u(a)

That middle stuff is just a well-defined (and spin independent) 4x4 matrix.

We can rewrite all of this as:

\sum_{s_a}\sum_{s_b} G = \sum_{s_a} \bar{u}^{(s_a)}(p_a)\, Q\, u^{(s_a)}(p_a), \qquad Q \equiv \Gamma_1(\slashed{p}_b + m_b)\Gamma_2
 = Q_{ij}\left(\sum_{s_a} u^{(s_a)}(p_a)\,\bar{u}^{(s_a)}(p_a)\right)_{ji}
 = Q_{ij}\,(\slashed{p}_a + m_a)_{ji}
 = \mathrm{Tr}\left[\Gamma_1(\slashed{p}_b + m_b)\Gamma_2(\slashed{p}_a + m_a)\right]

The only complication now involves keeping track of all of the terms. There
are no spins left at all. Typically, since we want the average over all spins,
applying this trick involves a factor of 1/4 in front of the amplitude.
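The trace itself is mechanical, so it is a natural thing to hand to the computer. Below is a small numerical sketch of my own (with arbitrary, made-up on-shell momenta) of the simplest such object, Tr[\gamma^\mu(\slashed{p}_1+m)\gamma^\nu(\slashed{p}_3+m)], which is the building block of the spin-averaged electron scattering amplitude above; it is compared against the well-known closed form 4[p_1^\mu p_3^\nu + p_1^\nu p_3^\mu - g^{\mu\nu}(p_1\cdot p_3 - m^2)].

```python
import numpy as np

# Dirac-representation gamma matrices and the metric (+,-,-,-)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
gam = [np.block([[I2, z2], [z2, -I2]])] + \
      [np.block([[z2, s], [-s, z2]]) for s in (sx, sy, sz)]
g = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):                  # pslash = p_mu gamma^mu
    return sum(g[mu, mu] * p[mu] * gam[mu] for mu in range(4))

def dot(p, q):
    return p @ g @ q

m = 0.511e-3                   # electron mass in GeV (illustrative)
p1 = np.array([np.sqrt(m**2 + 0.3**2), 0.0, 0.0, 0.3])
p3 = np.array([np.sqrt(m**2 + 0.3**2), 0.0, 0.2, np.sqrt(0.3**2 - 0.2**2)])

for mu in range(4):
    for nu in range(4):
        tr = np.trace(gam[mu] @ (slash(p1) + m*np.eye(4)) @ gam[nu] @ (slash(p3) + m*np.eye(4)))
        closed = 4*(p1[mu]*p3[nu] + p1[nu]*p3[mu] - g[mu, nu]*(dot(p1, p3) - m**2))
        assert np.isclose(tr.real, closed) and np.isclose(tr.imag, 0.0)
print("trace identity verified for all mu, nu")
```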


8 The Electroweak Model

Gross 13.1-13.2, 15.4

Here's a neutron decay, just to show that I know how to draw it!
[Feynman diagram of neutron decay not reproduced here.]

8.1 SU(2) Local Gauge Invariance

We've seen the secret recipe for figuring out gauge fields (and fundamental
forces):

1. Assume that the Lagrangian for a particle is invariant under some sort of
   local gauge transformation.

2. Find the interaction Lagrangian which makes the cross terms go away.

3. Add the free-field Lagrangian for the (vector) mediator.

This was such a success in E&M that we'll want to try it with the weak field.
So we'll imagine that we have some wave-function (could be a Dirac spinor, or
a complex field of some sort), on which we will apply the local gauge
transformation:

\psi \rightarrow e^{i\sigma^{(j)}\theta^{(j)}(x)}\psi        (207)

This is an SU(2) transformation. SU(2) stands for "special unitary," which
basically means that all of the elements of the SU(2) group are 2x2 unitary
matrices (Hermitian conjugate = inverse) with a determinant of 1. You can verify
that for any arbitrary vector of angles, our generating phase term satisfies this
requirement.

If that's the case, then the wave-function must be a 2-component thing:

\psi = \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}
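That verification takes one line numerically. Here is a quick sketch of my own: exponentiate i times an arbitrary real combination of the Pauli matrices and confirm the result is unitary with unit determinant.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(0)
theta = rng.normal(size=3)                     # an arbitrary vector of angles

U = expm(1j * sum(t*s for t, s in zip(theta, sig)))

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: unitary
print(np.isclose(np.linalg.det(U), 1.0))       # True: determinant 1, hence SU(2)
```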

Each component of this object may itself be,


1 and 2 states represent different isospins.

 
e
or
e

say, a bispinor, but basically, the


Various doublets include:

d
u

On this transformation,

is clearly unchanged.
When we apply this transform, we get:
ei

(j)

(x)j

iei

(j)

(x)j

j (j)

Or more generally, we can imagine defining a new vector field which obeys the
transformation:
i
W(j) W(j) + (j)
g
and a derivative:
D ig
such that terms like:

W(j) j

D D
are invariant upon the gauge transformation. This is what we did in E&M, after
all.
But theres a problem. The exponent term doesnt actually commute with
the covariant derivative. We actually need to define the W field gauge transform
as:
1
(208)
W(j) W(j) (j) + jkl W(k) (l)
g
As a result, we find that the W field Lagrangian is still:
1 (i) (i)
LW = F
F
4

(209)

but with the crucial difference:


(i)
F
= W(i) W(i) gjkl W(k) W(l)

(210)

This doesnt look too bad until you realize a) The free Lagrangian has 4-th order
terms in W , and b) There are no 2nd order terms in W with no other terms.
What a) and b) mean, respectively, are that a) We have a massless W field
(and observationally, the Ws and Zs have mass) and b) its non-trivial to create
a free solution. In fact, though, we can ignore higher order W 3 and higher
terms at low energy. The solution would simply be the same as for the photons.

114

8.2 Spontaneous Symmetry Breaking

So far, we're not doing so well. The theory predicts a massless mediator particle
for the weak force (although it does correctly predict three of them), and because
of the cross terms, it clearly becomes complicated at high energies.

To make progress, we're going to have to take a few steps back.

Let's imagine a scalar field with the Lagrangian:

\mathcal{L} = \partial_\mu\phi\,\partial^\mu\phi^* - V(\phi)        (211)

V(\phi) = -\mu^2|\phi|^2 + \lambda|\phi|^4        (212)

Note that normally, \phi is reserved for real-valued scalar fields. I'm using it
for a complex field in this case because the complex part will essentially be
swallowed by what follows. Note, too, the dimensionality of the terms in the
potential. \mu has units of energy, and \lambda is dimensionless.

This potential is known as the "Mexican Hat" potential (for reasons that
become clear when you plot it). It is somewhat strange because the mass-like
term (the one with the quadratic part of the Lagrangian) has the wrong sign.

Don't worry about how this Lagrangian arises. For now (and indefinitely,
frankly, since we still don't have an answer), it just is.

The ground state can be found at:

|\phi|_{GS} = u \equiv \sqrt{\frac{\mu^2}{2\lambda}}        (213)

And at the ground state:

V_{min} = -\frac{\mu^4}{4\lambda}        (214)

I know. You're impatient to know what this has to do with the weak force.
So far, we just have a scalar field.

Classically, we'd expect the scalar field to drop to its vacuum state. The
problem is that the vacuum state is a completely arbitrary value, since the field
can be at whatever phase it likes, each with equal probability:

\langle 0|\phi|0\rangle = u\, e^{i\theta}

Note that originally the system had a U(1) symmetry; it no longer does. This
is known as spontaneous symmetry breaking.
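Equations (213) and (214) are a one-line minimization, which sympy will happily confirm. This is a sketch of mine, treating |\phi| as a real variable r:

```python
import sympy as sp

r = sp.symbols('r', real=True)                 # r = |phi|
mu, lam = sp.symbols('mu lamda', positive=True)

V = -mu**2 * r**2 + lam * r**4                 # the Mexican-hat potential

extrema = sp.solve(sp.diff(V, r), r)
print(extrema)                                 # roots: 0 and +/- sqrt(mu**2/(2*lamda))

u = sp.sqrt(mu**2 / (2*lam))                   # the claimed ground state, eq. (213)
print(sp.simplify(V.subs(r, u)))               # -mu**4/(4*lamda), matching eq. (214)
print(sp.diff(V, r, 2).subs(r, u) > 0)         # True: it really is a minimum
```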
Why is this a big deal? Well, remember before, when we first derived electromagnetism? We started with a free field Lagrangian, and we made the substitution:
+ igA
and the interaction part came out naturally. More generally, we found:
1
L = ( + igA )( igA ) V () F F
4
where previously we used V () = m2 2 , but now we use the form above.
115

But now consider writing the field as a perturbation around the minimum:
= (u + R)ei
This is completely general. Working through everything, we get:
1
L = R R 22 R2 e2 u2 A A F F V (R)
4

(215)

From one field, we now have 2, each of which are massive, and where Ive
swallowed all terms not to second order into V (R).

8.3

Electroweak Theory (But without the handedness)

Now that weve got an idea of how spontaneous symmetry works, lets do it
for real. Well, almost. Im not going to include the handedness inherent in the
weak theory. The overall theory will work without it.
8.3.1

The Electroweak Fields at Rest

So lets first imagine that we have two types of particles, 1) A Higgs particle
(which, even though it will initially be a complex doublet, well still denote as
):


1
=
2
You may think that this means that the Higgs will be charged. You would
be wrong. We will generically assume that the Higgs interacts under some
potential, V (), which Ill specify later.
2) A Dirac particle.
Naively, wed expect the Lagrangian for the entire universe to look like:
L = V () + i m
But now suppose we allow a SU (2) U (1) local phase transition in both
fields:
"
!#
~

exp i g g
(216)
2
2
!#
"
~ ~

(217)
exp i g g
2
2
We get the factor of two in the coupling constant because there are two fields.
Further, the dimensionless prefactors, g and g are inserted because both fields
contribute to the final Lagrangian, and its only the combination of g g which
ends up mattering in the end. Likewise, since there are 3 Pauli matrices and 3
relevant angles, Im treating both as a 3-vector, and dotting them.

116

As defined then define the corresponding Gauge fields as transforming as:


1
(j)
A(i) A(i)
+ jkl A(k) (l)

g
and
B B +

1

g

(218)

(219)

This might look a bit confusing, but remember that these represent 4 vector
fields (all massless, as it turns out, at least initially):
LA,B

(i)
F
G

1 (i) (i) 1
F
F
G G
4
4
(i)
(k)
A(i)

gijk A(j)

A
B B

(220)
(221)
(222)

Even though B is the field generated by the U (1) transformation, you should
remember that it is not the photon.
Weve seen this sort of thing before (although perhaps in not so complex
a form). We get around all of this by redefining the derivative operators to
offset the gauge contribution:
D()

D()

g
~ + i g B
i ~ A
2
2

g
g
~ i B
i ~ A
2
2

(223)
(224)

From all of this we get the final (locally SU (2) U (1) gauge invariant Lagrangian):
1 (i) (i) 1
L = D() D() V ()+i D() m F
F
G G (225)
4
4
So far, so good.
Of course, multiplying it all out (including the D contributions), we get the
various interaction terms that weve come to know and love.
However, the Higgs has a rather strange form and appears to be charged.
Whats more, we have 4 massless vector fields, not simply 1. How to resolve
this?
8.3.2

Symmetry Breaking in Electroweak theory

In order to really understand the electroweak theory, we need to introduce an


explicit potential for the Higgs field:
V () = 2 + ( )2

(226)

The field will relax eventually, and clearly it can lie in any direction in phase
space, and will produce an equivalent contribution to the energy regardless of
117

which part of the doublet it occupies. Without a loss of generality, we can select
the angle such that:


0
h0||0i =
v/ 2
where

This is a convenient rotation and completely arbitrary, but will prove useful
later in the derivation.
We can now consider perturbations away from the minimum, such that:
!
0
=
(v+)

v=

where is the (single and real-valued) dynamical variable describing the Higgs
field around equilibrium. Because it only has this one degree of freedom, its a
real-valued single scalar particle at least in its simplest in its simplest incarnation. Because it hasnt actually been detected, it may be that the Higgs is a
bit more complex.
What happens when we expand the part of the Lagrangian in terms of v
and ?

v2  2 ~
1
~ + g 2 B B + 2gg B A(3) +Lint
g A A
L = L + 2 2 +LA,B
2
8
(227)
Plus additional terms that are either constant or higher order in (which arent
relevant at low energies) or constants (in v, for example), which dont show up
in the dynamics of the system.

Wow! Our Higgs has acquired a mass (= 2)! But thats not all.
8.3.3

The Higgs Mechanism

Were not going to talk about how the fermions gain mass, though presumably
its through a direct coupling term relating the Higgs to the fermionic field
which doesnt come into play until the Higgs relaxes into the true vacuum state.
However, the most important terms in the Lagrangian for now are the As and
Bs:

v2  2 ~ ~
LEW =
g A A + g 2 B B + 2gg B A(3)
8

Its pretty clear that the A(3) and B fields are coupled to one another in the
Lagrangian, and that if we are clever, we can redefine them such that:
  


B
A
cos W
sin W
(228)
=
Z
sin W cos W
A(3)
118

Where the Weinberg angle, W , is defined as


tan W =

g
g

(229)

Plugging this in to the Electroweak Lagrangian yields:




1
g2 v2

W+ W + + W W +
Z
Z
LEW =

8
cos2 W

(230)

This is awesome! In our new rotated state, weve found that 3 of the vector
fields are massive, and related such that:
gv
2

(231)

MW
cos W

(232)

Mw =
and
MZ =

The Weinberg angle must be determined experimentally, based on comparing


the following interactions:
e

e
e

Z0

e+
e

W
e

Experimentally, W 29 .
This is outstanding, because we immediately get other values of physical
interest. For example (and you have to follow through the algebra a bit):
e = g sin W

(233)

This can be found by either looking at the coupling of the current terms in
W to the new A field, or by looking at the coupling of the charged leptons
themselves. At any rate, from the measured unit charge,e = 0.303 in natural
units, we get
g = 0.65 ; g = 0.36
(234)
Of course, also find:

MZ
= 1.14
MW
119

(235)

an observed ratio to ridiculous accuracy.


It actually gets better. Consider the Neutron decay relation:
n p + e +
Fermi assumed that this relation was a 4-particle interaction (with no mediator)
of the form:


GF 
L = p (1 1.26 5)n e (1 5 )
2

where the 5 comes into play from the fact that neutrinos only occur in lefthanded combinations (to be discussed in the next section). The 1.26 comes from
the fact that neutrons and protons are composite particles (and the weak force
has a mediator). Clearly, the neutron decay rate should be proportional to G2F ,
and thus the term is well-determined. It is:
GF = (299.3 GeV )2
But the above interaction Lagrangian can be rewritten in terms of up and down
creation and annihilation operators. We may thus write:

g2 2
= 80.03 GeV
(236)
MW =
8GF
which is precisely the measured value.
Whats more, since
gv
MW =
2
we can solve for the electroweak symmetry breaking scale:
r
2
2MW
v=
=
= 246 GeV

g
But we can finish up by noting that:

MH = 2v = 347 GeV
which is why we have some idea what the energies are for the Higgs.
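Plugging the quoted couplings back through the relations of this section (tan \theta_W = g'/g, e = g sin \theta_W, M_W = gv/2, M_Z = M_W / cos \theta_W) is a nice consistency check. The following numerical sketch of mine reproduces the ballpark figures in the text: \theta_W of about 29 degrees, e close to 0.3, M_W near 80 GeV, and M_Z/M_W of about 1.14.

```python
import numpy as np

g, gprime = 0.65, 0.36      # couplings quoted in the text
v = 246.0                   # electroweak symmetry-breaking scale in GeV

theta_W = np.arctan(gprime / g)            # tan(theta_W) = g'/g
e = g * np.sin(theta_W)                    # unit electric charge in natural units
M_W = g * v / 2.0                          # W mass
M_Z = M_W / np.cos(theta_W)                # Z mass

print(np.degrees(theta_W))   # ~29 degrees
print(e)                     # ~0.31, close to the measured 0.303
print(M_W, M_Z, M_Z / M_W)   # ~80 GeV, ~91 GeV, ratio ~1.14
```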

8.4 Handedness in the weak force

I deliberately wanted to leave off an observationally important detail when
talking about symmetry breaking: the weak force has a preferred orientation.
Let's start by defining a new composite matrix:

\gamma^5 \equiv i\gamma^0\gamma^1\gamma^2\gamma^3 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}        (237)

This is a particularly handy object because we can break any bispinor into two
separate components:

\psi_L = \frac{1}{2}(1 - \gamma^5)\psi        (238)

\psi_R = \frac{1}{2}(1 + \gamma^5)\psi        (239)

As an exercise, you should try to show that a massless spin-up particle moving
in the +z direction is right-handed, and one moving in the opposite direction is
left-handed.

The reason that this comes into play is that only left-handed multiplets obey
the SU(2) gauge invariance. Right-handed multiplets only obey the U(1) gauge
invariance:

\psi_R \rightarrow e^{ig'\beta}\psi_R

The implication of this is that in weak interactions, only left-handed neutrinos
are created, and contrarily, right-handed anti-neutrinos. We do not know why
this is, and the result has to be put in by hand.

Of course, since neutrinos are massive, there is a Lorentz frame in which a
particular neutrino is right-handed. They simply aren't created that way.
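The projector algebra, and the suggested exercise, are quick to verify numerically. In this sketch of mine (Dirac representation, as elsewhere in these checks), P_L and P_R are orthogonal projectors that sum to the identity, and the massless, spin-up, +z-moving spinor is annihilated by P_L, so it is purely right-handed.

```python
import numpy as np

I4 = np.eye(4)
g5 = np.zeros((4, 4))
g5[0, 2] = g5[1, 3] = g5[2, 0] = g5[3, 1] = 1.0   # gamma^5 in the Dirac basis

PL, PR = 0.5*(I4 - g5), 0.5*(I4 + g5)

print(np.allclose(PL @ PL, PL), np.allclose(PR @ PR, PR))   # True True (projectors)
print(np.allclose(PL @ PR, np.zeros((4, 4))))               # True (orthogonal)
print(np.allclose(PL + PR, I4))                             # True (complete)

# Massless, spin-up, moving in +z: the m -> 0 limit of u^(1) is
# u = sqrt(E) (1, 0, 1, 0).
E = 1.0
u = np.sqrt(E) * np.array([1.0, 0.0, 1.0, 0.0])
print(PL @ u)                    # zero vector: no left-handed piece
print(np.allclose(PR @ u, u))    # True: the state is purely right-handed
```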

8.5

Quantized Weak Fields

Were not going to do any weak calculations in any detail, but at very least,
you should be able to do so. So, there are a few corrections to our Feynman
diagrams:
1. Because they are massive, W and Z particles have 3 possible polarization
states. Thus:
p = 0
exhausts the degrees of freedom. Otherwise, external Ws and Zs (which
dont exist, typically, as they are short lived), would look the same as
external photons.
2. The propagator for W and Z particles is:
i

q q /M 2
q2 M 2

as you might expect. However, because these particles are so massive, we


can typically ignore the q terms and simply use

i 2
M
3. Weak vertices get a factor of:
igW
(1 5 )
2 2
It is the appearance of the 1 5 which guarantees that were only including left-handed interactions.

121

Renormalization, Revisited

Gross: Chapters 11, 16


We have primarily focused on first-order (2 vertex) diagrams. We have found
that doing the integrals to compute the scattering amplitudes were generally
trivial (even if reducing those amplitudes to simple algebraic forms were not).
We have ignored higher order diagrams in part because the contributions typically scale as g n , where n is the number of vertices in a diagram and (in QED
at least), g << 1. However, weve seen that even for simple calculations,
like the vacuum energy density of QED or our theory, terms typically
approach infinity. We need a method of dealing with this, and that method will
be renormalizatin.
Consider the following diagram:
p

p
On the face of it, this diagram looks absurd. After all, we start with a
single electron and end with a single electron. Clearly, p = p , and there is no
scattering amplitude to compute. Humor me anyway.
Lets apply our Feynman rules to compute the scattering amplitude. We
have, of course, already done rule #1. Applying Rules # 2& 3, we get:

g 2 (2)8 u(s ) (p ) u(s) (p)(p k k )(k + k p)


We have two propagators, which combine (Rule #4) to yield:
g 2

k/ + m (s)
(2)8 (s )
u (p ) 2
u (p)(p k k )(k + k p)
k2
k m2

Rule # 5 gets rid of one of the functions, since k = p k.


So we get:
Z 4

d k
/p k/ + m
g 2
(p p)u(s ) (p )
u(s) (p)
2
k
(p k)2 m2
This means:
Af i = ig 2

1
(2)4

p
d4 k (s )
/ k/ + m
u (p )
u(s) (p)
2
k
(p k)2 m2
122

For large k this scales as k 3 , and thus k 3 d4 k diverges lograthmically with k.


We get an infinite contribution, in other words.
You might reasonably ask what physical interpretation Af i yields in this
case. Well, consider the electron viewed from its rest frame. In that case, we
have:
(t) = (0)eimt
The mass in the exponent represents the physical mass, the only one measurable by experiment. Thus, in our case, the scattering amplitude is essentially
the term resulting from the perturbation:
I = m
H
This is a perturbation on the self-energy of the electron. This, we really measure
the combination:
mphys = mbare + m
(240)
where: So lets consider a spin-up particle at rest, in which case:

1
0

u=N
0
0

and p0 = m, with all other components equal to zero. That simplifies things
considerably. In particular, it allows us to compute the total perturbation to
the energy as:

Z
d4 k 1 (/p k/) + m
(241)

m = e2

(2)4 k 2 (p k)2 m2
00
This simplifies to:
m = e

1
d4 k
(/p k/ + m)
4
2
2
(2) k (k0 2mk0 ~k 2 )

00

Consider the useful relations:


a
/


which reduces:

and thus:

2/
a
4

(p
/ k/) + m
= 2/p + 2/
k + 4m
(p k)2 m2



p k/) + m
(/
= 2p0 + 2k0 + 4m = 2(m + k0 )

(p k)2 m2
00
123

so
m = 2e

d4 k
m + k0
(2)4 k 2 (k02 2mk0 ~k 2 )

(242)

If we then do a coordinate transformation:


w=
we get:
m = 2me

k
m

1 + w0
d4 w
(2)4 w2 (w02 2w0 w
~ 2)

(243)

Recognizing that this is spherically symmetric (and defining |w|


~ = W , we get:
Z
Z

e2
m
W 2 (1 + w0 )
= 3
dW
dw0 2
m
2 0
(w0 W 2 )(w02 W 2 2w0 )

This integral blows up, and not just because of the 4 roots in the denominator.
If we integrate over w0 using a standard contour integral, we pick up a factor
of /4, and two of our terms drop out (and w0 W ), yielding (on the high W
side):
Z
e2
dW
m
=
2
m
8 0 W
If we put in a simple truncation, this integral yields ln(W_{max}).

Or, to combine everything, we get:

\frac{\delta m}{m} = C\,\ln\left(\frac{k_{max}}{m}\right)

and multiplying our units out correctly, we get:

\delta m = m\,\frac{3\alpha}{2\pi}\,\ln\left(\frac{k_{max}}{m}\right)        (244)

For an electron, even if we cut off the integral at the Planck scale, this only
produces about a 20% correction to m. Of course, it means that m_{bare} is roughly
80% of m, since m is the experimental mass.
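As a quick numerical illustration of that last claim (my own check, using equation (244) with a Planck-mass cutoff and \alpha of about 1/137):

```python
import numpy as np

alpha = 1.0 / 137.036          # fine-structure constant
m_e = 0.511e-3                 # electron mass in GeV
k_max = 1.22e19                # Planck mass in GeV, used as the cutoff

dm_over_m = 3.0 * alpha / (2.0 * np.pi) * np.log(k_max / m_e)
print(dm_over_m)               # ~0.18: roughly a 20% shift
print(1.0 - dm_over_m)         # ~0.82: m_bare is roughly 80% of the measured mass
```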
We found that we needed to renormalize because the final integral was ultimately proportional to k 1 . This is bad thing, since were integrating to infinity.
However, it could be worse. We could find that the integral isnt normalizable
at all.
Our criterion is, take NB as the number of bosonic lines attached to a vertex
(for QED, this is 1), and NF as the number of fermionic lines attached to a
vertex. Because they have different factors of k in the propagator, they are
going to contribute differently. Computing
3
n = NB + NF
2
124

(245)

(4 for QED) tells us whether we're going to encounter trouble. Theories with
n <= 4 are renormalizable, and those with n > 4 are not. As it happens, our
QED and weak theories to date lie just on the edge. Even the 4-W vertices of
the weak theory just make it in under the line.

Equivalently, if the energy dimensionality of the coupling term of the theory
is not negative (that is, not less than E^0), then we're likely going to be able to
renormalize.

There are a number of tricks for renormalization in theory, and regularization
in practice. As we are out of time, I suggest you take a good look at Gross to
see how it's done.
