

A Quick Refresher on Quantum Mechanics


The Wave Function (Griffiths Ch 1)
• The Schrödinger Equation (1.1)
To begin our discussion, where else would we start but Schrödinger's equation! To get a feel for this, let's imagine a particle of mass m that moves along the x axis, pushed by some force F(x,t). Classically we can apply Newton's laws to determine the momentum p = mv, the position x(t), the velocity v = dx/dt, the acceleration, the kinetic energy T = ½mv², etc.

From Newton's second law, $m\frac{d^2x}{dt^2} = F(x,t) = -\frac{\partial V}{\partial x}$, these quantities follow once the initial conditions are specified.

Quantum mechanics approaches this differently: we look for the particle's wave function Ψ(x,t), which is obtained by solving the Schrödinger equation (SEQ),

$$i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V\Psi$$

• The Statistical Interpretation (1.2)


Now we can ask: what actually is the wave function? Although there are several interpretations depending on how you treat quantum mechanics, we will proceed with Born's statistical interpretation, which says that

$$|\Psi(x,t)|^2\,dx = \left\{\text{probability of finding the particle between } x \text{ and } x+dx \text{ at time } t\right\}$$

Measurements

In this sense, the wave function is a complex quantity whose squared magnitude represents the probability density for finding the particle at a given point in space and time. In fact, experiments show that a quantum particle does not have a precise position (its possible locations are spread across a probability distribution) until it is measured, after which it has a definite position. Measurement collapses the wave function to a single point, and immediately repeated measurements then return the same value.

• Probability (1.3)
Now, because |Ψ|² is a probability density, we need to refresh our probability vocabulary.

For a set of discrete data points N(j), or a continuous probability density ρ(x), the following definitions are used:

                          Discrete                        Continuous
Total data points:        N = Σ_j N(j)                    ∫ ρ(x) dx = 1
Probability of outcome:   P(j) = N(j)/N                   P = ρ(x) dx
Average (expectation):    ⟨j⟩ = Σ_j j P(j)                 ⟨x⟩ = ∫ x ρ(x) dx
Variance:                 σ² = ⟨j²⟩ − ⟨j⟩²                 σ² = ⟨x²⟩ − ⟨x⟩²

• Normalization (1.4)
Another important feature of the wave function is that it must be normalizable. Because the squared magnitude of the wave function represents the probability density for the particle, an integral over all space must return unit probability:

$$\int_{-\infty}^{\infty} |\Psi(x,t)|^2\,dx = 1$$

This must be true for all wave functions. If the integral comes out to some other finite value, we simply multiply Ψ by a leading coefficient to correct it; this is called normalizing. If the integral is infinite, or zero, the wave function cannot be normalized and cannot represent a physical particle (even if it solves the SEQ).

An important note is that the SEQ preserves normalization in time. It can be shown (using the SEQ and integration by parts) that

$$\frac{d}{dt}\int_{-\infty}^{\infty} |\Psi(x,t)|^2\,dx = 0$$

so once the function is normalized at some instant, the integral stays equal to 1 for all later times. Thus the wave function remains normalized (the SEQ does not modify the normalization).
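As a quick numerical illustration (a minimal sketch of my own, not from Griffiths; the Gaussian trial function and the grid are arbitrary choices), we can normalize a trial wave function on a grid and confirm that the integral of |ψ|² then comes out to 1:

```python
import numpy as np

# Arbitrary (unnormalized) trial wave function on a grid -- a Gaussian for convenience
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2)

# Approximate the integral of |psi|^2 over all space by a Riemann sum on the grid
norm2 = np.sum(np.abs(psi)**2) * dx

# Divide by the square root of that integral so the new integral equals 1
psi_normalized = psi / np.sqrt(norm2)
print(np.sum(np.abs(psi_normalized)**2) * dx)   # -> 1.0 (up to numerical error)
```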

• Momentum (1.5)
One of the most important quantities in optics and in quantum mechanics is the momentum of a particle. We can get at it through expectation values. For example, the expectation value of position is

$$\langle x\rangle = \int_{-\infty}^{\infty} x\,|\Psi(x,t)|^2\,dx$$

What this means is the average position found by measuring a large number of identically prepared systems. Remember, measurement collapses the wave function, so this average cannot be obtained from repeated measurements on a single particle!

We may also ask how this average position moves in time:

$$\frac{d\langle x\rangle}{dt} = \int x\,\frac{\partial}{\partial t}|\Psi|^2\,dx$$

Using the SEQ and integration by parts, we arrive at:

$$\frac{d\langle x\rangle}{dt} = -\frac{i\hbar}{m}\int \Psi^*\,\frac{\partial \Psi}{\partial x}\,dx$$

This quantity is the expectation value of the velocity. Since the particle does not have a definite position, it does not have a definite velocity either; velocity is likewise a statistical quantity. The momentum expectation value is then found by multiplying by the mass:

$$\langle p\rangle = m\frac{d\langle x\rangle}{dt} = \int \Psi^*\left(-i\hbar\frac{\partial}{\partial x}\right)\Psi\,dx$$

Position operator: $\hat{x} = x$

Momentum operator: $\hat{p} = -i\hbar\,\dfrac{\partial}{\partial x}$

What we have done here is to illustrate the idea of an operator. Operators are sandwiched between the wave function's complex conjugate and the wave function, and the integral returns the expectation value of interest. Think of this as a fancier version of an expression like X + Y, where + is an 'operator' instructing us to perform an operation and produce a certain outcome. We can define other operators, such as the kinetic energy, depending on our goals.
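As a numerical sketch (my own illustration; the Gaussian wave packet, its width, and the mean wavenumber k0 are arbitrary assumptions), the operator 'sandwich' can be evaluated directly on a grid:

```python
import numpy as np

hbar = 1.0                          # work in units where hbar = 1
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet centered at x0 with mean wavenumber k0
x0, sigma, k0 = 2.0, 1.5, 3.0
psi = np.exp(-(x - x0)**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# <x> = integral of psi* (x) psi
exp_x = np.real(np.sum(np.conj(psi) * x * psi) * dx)

# <p> = integral of psi* (-i hbar d/dx) psi, with the derivative taken numerically
dpsi_dx = np.gradient(psi, dx)
exp_p = np.real(np.sum(np.conj(psi) * (-1j * hbar) * dpsi_dx) * dx)

print(exp_x, exp_p)                 # close to x0 = 2.0 and hbar*k0 = 3.0
```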

• Heisenberg Uncertainty (1.6)


Although we all know this important relation, we will review it briefly. In essence, it comes down to the fact that a nice wave with a well-defined wavelength (momentum) does not have a well-defined position, and conversely a narrow pulse with a well-defined position does not have a well-defined wavelength. This results in a minimum combined uncertainty (due to the statistical nature of the theory) given by:

$$\sigma_x\,\sigma_p \geq \frac{\hbar}{2}$$

That is, when measuring position and momentum, there is a minimum possible combined spread.
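A quick numerical check (again a sketch of my own, with an assumed Gaussian of width σ): for a Gaussian wave packet the bound is saturated, σx·σp = ħ/2.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]

sigma = 1.5
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# sigma_x^2 = <x^2> - <x>^2  (here <x> = 0 by symmetry)
mean_x = np.sum(np.abs(psi)**2 * x) * dx
var_x = np.sum(np.abs(psi)**2 * (x - mean_x)**2) * dx

# <p^2> = hbar^2 * integral |dpsi/dx|^2 dx (integration by parts; <p> = 0 for real psi)
dpsi = np.gradient(psi, dx)
mean_p2 = hbar**2 * np.sum(np.abs(dpsi)**2) * dx

print(np.sqrt(var_x * mean_p2))     # ~ hbar/2 = 0.5, the minimum allowed value
```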

The Time-Independent SEQ (Griffiths Ch 2)


• Stationary States (2.1)
So now that we are a bit familiar with the wave function, let's discuss how to find it. To start, we will not worry about time dependence and look for a time-independent wave function multiplied by some temporal variation (separation of variables):

$$\Psi(x,t) = \psi(x)\,\varphi(t)$$

We can then plug this into the SEQ, following the typical separation-of-variables procedure:

$$i\hbar\frac{1}{\varphi}\frac{d\varphi}{dt} = -\frac{\hbar^2}{2m}\frac{1}{\psi}\frac{d^2\psi}{dx^2} + V$$

where, as usual, the two sides depend on different variables (t and x), so each must equal a constant, which we call E. We may then split this into two equations, one for time and one for space.

The solution of the time equation is a simple exponential, $\varphi(t) = e^{-iEt/\hbar}$, where the integration constant can be absorbed into ψ as a normalization factor. Thus our task reduces to finding the spatial dependence of the wave function from the local potential and the boundary conditions:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\psi = E\psi$$

Now, a few notes:

1. The wave functions Ψ(x,t) = ψ(x)e^{-iEt/ħ} are stationary states. When we compute expectation values we multiply the wave function by its complex conjugate, so the time-dependent phase always cancels and nothing observable changes in time.
2. They are states of definite total energy, and the energy is described by the Hamiltonian:

$$H(x,p) = \frac{p^2}{2m} + V(x)$$

which is simply a statement of the total energy (kinetic + potential) in the system. If we replace the momentum with the momentum operator (a canonical substitution), we obtain the Hamiltonian operator:

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$$

and, if you look carefully, we can then rewrite our space-dependent SEQ as:

$$\hat{H}\psi = E\psi$$

In this sense, applying the Hamiltonian operator returns the total energy of the system.

3. More general solutions can be achieved by superimposing many of the individual solutions
(similar to a Fourier Series), using different amplitude and phase constants.
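To make this concrete, here is a minimal numerical sketch (my own, not from the text; the infinite square well with ħ = m = L = 1 and the grid size are assumptions chosen for illustration) that finds stationary states by diagonalizing a finite-difference Hamiltonian:

```python
import numpy as np

hbar, m, L, N = 1.0, 1.0, 1.0, 800
x = np.linspace(0, L, N + 2)[1:-1]       # interior grid points; psi = 0 at the walls
dx = x[1] - x[0]

# Kinetic term -hbar^2/2m d^2/dx^2 as a tridiagonal second-difference matrix
main = np.full(N, -2.0)
off = np.ones(N - 1)
T = -(hbar**2 / (2 * m * dx**2)) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

V = np.zeros(N)                          # V = 0 inside an infinite square well
H = T + np.diag(V)

E, psi = np.linalg.eigh(H)               # eigenvalues come back sorted ascending
print(E[:3])                                                   # numerical energies
print((np.arange(1, 4) * np.pi)**2 * hbar**2 / (2 * m * L**2))  # analytic n^2 pi^2 / 2
```

The lowest few numerical energies approach the analytic values as the grid is refined, and the columns of psi approximate the corresponding stationary states.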

• Harmonic Oscillator (2.3)


Now that we know how to solve the SEQ and deal with the wave function, let's try an example.
Some other useful examples exist, including the infinite square well, the finite square well, and finite
barrier, but we will not cover these here as they are not as critical for this course. Please review them in
Ch 2 of the Griffiths Text if you are not comfortable with them.

One of the more important examples is the harmonic oscillator. This is used to model materials as well
as nonlinear processes and is fundamental in quantum optics. However, it also allows us to introduce
ladder operators (creation and annihilation operators) which are a neat and useful shorthand for some
of the math. So, let's dig in.

The motion of a classical oscillator is described by Hooke's law,

$$F = -kx = m\frac{d^2x}{dt^2}$$

with solution

$$x(t) = A\sin(\omega t) + B\cos(\omega t), \qquad \omega = \sqrt{k/m}$$

The potential energy of the oscillator is $V(x) = \frac{1}{2}kx^2$, which is of course a parabola. Although there is no perfect harmonic oscillator, practically any potential near a local minimum can be approximated as a simple harmonic oscillator.

Now, our problem is to solve the SEQ with the potential

$$V(x) = \frac{1}{2}m\omega^2 x^2$$

producing

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}m\omega^2 x^2\,\psi = E\psi$$

Now there are two ways to solve this problem: an algebraic approach (using ladder operators) and an analytic approach. If you are interested you may look through the Griffiths text for the analytic approach, but we will only cover the algebraic one here.

So, we can start by rewriting our SEQ in a more suggestive form,

$$\frac{1}{2m}\left[\hat{p}^2 + (m\omega x)^2\right]\psi = E\psi$$

In this case we recognize that the term in brackets could potentially be factored (it looks like a sum of squares), and this is our goal. Looking for the 'roots' of that sum of squares, we define:

$$a_\pm \equiv \frac{1}{\sqrt{2\hbar m\omega}}\left(\mp i\hat{p} + m\omega x\right)$$

Now, note that the things we are dealing with here are operators, and factoring operators is tricky because they don't commute. Thus we need to check whether the factorization actually works, so let's work out what $a_-a_+$ is. Another note: always use a test function f(x) when working with operators, because it is easy to mess up the math if you don't; you can throw the test function away once you've done the algebra. It is sort of like checking how to simplify (4+4)*3^2 by trying it on a trial value first.

$$a_-a_+\,f = \frac{1}{2\hbar m\omega}\left[\hat{p}^2 + (m\omega x)^2\right]f + \frac{1}{2}f$$

(where $\frac{d}{dx}(xf) = x\frac{df}{dx} + f$ was used in the last step). If we then throw out the test function we see that:

$$a_-a_+ = \frac{1}{\hbar\omega}\hat{H} + \frac{1}{2}$$

Note that this is very similar to our original equation, but with an extra factor of $\frac{1}{2}\hbar\omega$. Thus we can write our previous equation as:

$$\hat{H} = \hbar\omega\left(a_-a_+ - \frac{1}{2}\right)$$

We note that the order of $a_+$ and $a_-$ matters. If you flip them, the extra $\frac{1}{2}\hbar\omega$ term changes sign, so we can also write:

$$\hat{H} = \hbar\omega\left(a_+a_- + \frac{1}{2}\right), \qquad\text{with}\qquad [a_-, a_+] = 1$$

What we find is that if ψ solves the SEQ with energy E, then $a_+\psi$ solves it with energy $E + \hbar\omega$:

$$\hat{H}(a_+\psi) = (E + \hbar\omega)(a_+\psi)$$

and in the same sense $a_-\psi$ solves it with energy $E - \hbar\omega$.

Thus these operators are a great tool for generating solutions of higher and lower energy. If we can find one solution (with energy E) to get started, we can use these operators to raise or lower the energy. Because of this they are known as the creation (adding energy) and annihilation (removing energy) operators, also sometimes called the raising and lowering operators.

Yet if you are astute you might object that continued application of the annihilation operator would eventually produce a negative-energy state. What we are actually guaranteed is only that the result is a solution of the SEQ, NOT that it is normalizable; so there may be a mathematical answer, but it is not physical.

We can, though, envision a state with the lowest possible energy, such that $a_-\psi_0 = 0$.

This is a relatively simple first-order ODE, and integrating it we find:

$$\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega}{2\hbar}x^2}, \qquad\text{with energy } E_0 = \frac{1}{2}\hbar\omega$$

This, then, is the ground state (lowest energy). Starting from it, we can apply the ladder operators to move up and down and generate as many energy levels as we want, $\psi_n \propto (a_+)^n\psi_0$ with $E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega$. Neat! The only trick is to properly normalize the wave functions that you find.
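As a sanity check on the ladder-operator algebra, here is a small numerical sketch (my own; the truncation to N = 60 number states is an assumption) that builds a± as matrices and confirms the spectrum (n + ½)ħω:

```python
import numpy as np

hbar, omega, N = 1.0, 1.0, 60

# Lowering operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
a_dag = a.conj().T

# Commutator [a-, a+] = 1 (exact except for the last row/column, a truncation artifact)
comm = a @ a_dag - a_dag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))    # True

# H = hbar*omega*(a+ a- + 1/2); its eigenvalues should be (n + 1/2) hbar omega
H = hbar * omega * (a_dag @ a + 0.5 * np.eye(N))
print(np.linalg.eigvalsh(H)[:5])                     # [0.5, 1.5, 2.5, 3.5, 4.5]
```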

• Free Particle (2.4)


The next useful example for us is the free particle. This is fairly simple in that a free particle experiences no potential, so V(x) = 0. Thus our time-independent SEQ becomes a fairly straightforward ODE to solve:

$$\frac{d^2\psi}{dx^2} = -k^2\psi, \qquad k \equiv \frac{\sqrt{2mE}}{\hbar}$$

with solutions

$$\psi(x) = Ae^{ikx} + Be^{-ikx}$$

and, attaching the usual time dependence $e^{-iEt/\hbar}$ that we discussed earlier,

$$\Psi(x,t) = Ae^{ik\left(x - \frac{\hbar k}{2m}t\right)} + Be^{-ik\left(x + \frac{\hbar k}{2m}t\right)}$$

so the solutions represent waves traveling to the right and to the left.

Now, if we look carefully we find that this wave function cannot be normalized (integrating |e^{ikx}|² over all space diverges). What this means is that the separable solutions are not physically realizable states: a free particle cannot exist in a stationary state, and cannot have a definite energy. However, these solutions are still useful, because we can combine them to produce normalizable waveforms. The general solution is

$$\Psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \phi(k)\, e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}\,dk$$

This function can in fact be normalized for a suitable $\phi(k)$, but it doesn't carry a single k, it carries a range of them; it represents a wave packet. To solve a particular problem we take the initial wave function and solve for $\phi(k)$ at t = 0. This is essentially a Fourier transform (Plancherel's theorem):

$$\phi(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \Psi(x,0)\, e^{-ikx}\,dx$$

Now, another interesting feature is the velocity of the wave. Because a separable solution varies in both x and t, we can determine the speed and direction at which it moves (its shape doesn't change, it simply translates through space). Just as for electromagnetic waves, we can read off the speed of the quantum wave from the phase:

$$v_{\text{quantum}} = \frac{\hbar k}{2m} = \sqrt{\frac{E}{2m}}$$

But if we compare this to the velocity of a classical particle with the same energy we find:

$$v_{\text{classical}} = \sqrt{\frac{2E}{m}} = 2\,v_{\text{quantum}}$$

This apparent contradiction is resolved by the difference between the phase velocity and the group velocity of a wave packet. Recalling from EMF, the group velocity is $d\omega/dk$ while the phase velocity is $\omega/k$. Taking our dispersion relation $\omega = \hbar k^2/2m$, the phase velocity is $\hbar k/2m$ (the $v_{\text{quantum}}$ above), while the group velocity is $\hbar k/m$, exactly twice as large. It is the group velocity of the packet, not the phase velocity of the ripples, that represents the particle's speed, and it matches the classical velocity.
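Here is a short numerical sketch (my own construction; the Gaussian packet parameters are arbitrary) that evolves a free-particle wave packet through its Fourier components and compares the speed of the packet's peak (group velocity) with the phase velocity:

```python
import numpy as np

hbar, m = 1.0, 1.0
N, Lbox = 4096, 400.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]

# Initial Gaussian packet with mean wavenumber k0
k0, sigma = 5.0, 2.0
psi0 = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Fourier components phi(k), evolved with the free-particle phase e^{-i hbar k^2 t / 2m}
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi0)
t = 10.0
psi_t = np.fft.ifft(phi * np.exp(-1j * hbar * k**2 * t / (2 * m)))

peak_speed = (x[np.argmax(np.abs(psi_t))] - x[np.argmax(np.abs(psi0))]) / t
print(peak_speed)              # ~ hbar*k0/m = 5.0  (group velocity = classical speed)
print(hbar * k0 / (2 * m))     # 2.5                (phase velocity, half as large)
```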

There are some additional concepts in Ch 2 on scattering states, bound states, and the scattering matrix
which may be of interest to you so I encourage you to look through these sections on your own, but for
brevity I will leave them out in this summary.

Formalism (Griffiths Ch 3)
1. Hilbert Space (3.1)
In the previous discussion we have stumbled upon a few interesting properties, such as the even energy spacing of the harmonic oscillator and the uncertainty principle. What we want to do now is to make more coherent sense of what we have already discussed, adding a bit more mathematical rigor to the descriptions.

Quantum theory is based on two constructs: wave functions and operators. The state of a system is
represented by its wavefunction, and observables are represented by operators. Mathematically,
wavefunctions satisfy the conditions to be described as vectors, and operators act as linear
transformations upon them. So, if we say it a bit differently, the language of quantum is linear algebra.

We will now build more description of our quantum ideas using linear algebra. If you have not taken
linear algebra in a while (or not at all) please review. There is a high level discussion at the end of
these notes for you to look through but you may want to refer to a good linear algebra textbook.

For an N-dimensional space, the easiest way to represent a vector is as a column of its components with respect to a specified orthonormal basis:

$$|\alpha\rangle \;\rightarrow\; \mathbf{a} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$$
And the inner product of two vectors (generalizing the dot product to N dimensions and complex numbers) is:

$$\langle\alpha|\beta\rangle = a_1^*\,b_1 + a_2^*\,b_2 + \cdots + a_n^*\,b_n$$

And linear transformations, T, are represented by matrices (in a given basis), which act on vectors to produce new vectors by the standard rules of matrix multiplication:

$$|\beta\rangle = \hat{T}|\alpha\rangle \;\rightarrow\; \mathbf{b} = \mathbf{T}\mathbf{a} = \begin{pmatrix} t_{11} & \cdots & t_{1n} \\ \vdots & \ddots & \vdots \\ t_{n1} & \cdots & t_{nn} \end{pmatrix}\begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$$
But the vectors we encounter in quantum are in general functions that live in infinite dimensional
spaces. However, the notion of functions as a matrix or vector is a bit awkward and should be carefully
considered.

The collection of all functions of x constitutes a vector space, but for our purposes it is much too large. We need to impose some of the conditions we discussed earlier, for example the normalization

$$\int |\Psi|^2\,dx = 1$$

Moreover, the functions we admit must all be square integrable on the relevant interval:

$$\int |f(x)|^2\,dx < \infty$$

These conditions constitute a much smaller space of possible vectors. Although there are many
examples, one which we are particularly interested in is the L2(a,b) space (mathematician language)
which is the space of functions that are square integrable with respect to a certain variable (e.g. x). The
collection of these functions is called a Hilbert Space (physics language).

Thus, our wavefunctions live inside of the Hilbert Space

(Note: 'Hilbert space' is a general term and is not always equivalent to L², BUT in quantum mechanics L² is the only one we use, so the terms have become interchangeable. Fundamentally, though, they are not necessarily the same thing; see the appendix at the end of these notes.)

In this case we define the inner product of two functions f(x) and g(x), both square integrable (i.e. belonging to the Hilbert space, the L² set), as:

$$\langle f|g\rangle = \int f(x)^*\,g(x)\,dx$$

The square-integrability restriction ensures that this product is a finite number, as can be proved using the Schwarz inequality. Moreover, it is readily verified that this definition satisfies the three defining properties of an inner product:

$$\langle\beta|\alpha\rangle = \langle\alpha|\beta\rangle^*$$
$$\langle\alpha|\alpha\rangle \geq 0, \quad\text{and}\quad \langle\alpha|\alpha\rangle = 0 \text{ only if } |\alpha\rangle = 0$$
$$\langle\alpha|\,\big(b|\beta\rangle + c|\gamma\rangle\big) = b\,\langle\alpha|\beta\rangle + c\,\langle\alpha|\gamma\rangle$$

A few definitions then:

• A function is said to be normalized if its inner product with itself is 1


• Two functions are said to be orthogonal if their inner product is 0
• A set of functions is orthonormal if they are normalized and mutually orthogonal (i.e. $\langle f_m|f_n\rangle = \delta_{mn}$)
• A set of functions is complete if any other function (in the same space) can be expressed as a linear combination of them:

$$f(x) = \sum_n c_n\,f_n(x) \qquad\text{with coefficients}\qquad c_n = \langle f_n|f\rangle$$
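A small numerical sketch (my own; the sine basis of the infinite square well on [0, 1] and the target function are arbitrary choices) showing orthonormality and how the coefficients $c_n = \langle f_n|f\rangle$ rebuild a function:

```python
import numpy as np

x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(np.conj(f) * g) * dx

def f_n(n):
    # Orthonormal sine basis (stationary states of the infinite well on [0, 1])
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

# Orthonormality: <f_m | f_n> = delta_mn
print(round(inner(f_n(2), f_n(2)).real, 3), round(inner(f_n(2), f_n(5)).real, 3))  # 1.0 0.0

# Expand an arbitrary function (vanishing at the endpoints) and rebuild it from c_n
f = x * (1 - x)
c = [inner(f_n(n), f) for n in range(1, 60)]
f_rebuilt = sum(c[n - 1] * f_n(n) for n in range(1, 60))
print(np.max(np.abs(f - f_rebuilt)))    # small residual -> the set is complete for such functions
```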

2. Observables (3.2)

Hermitian Operators

The expectation value of an observable can be expressed very neatly using inner products:

$$\langle Q\rangle = \int \psi^*\,\hat{Q}\,\psi\,dx = \langle\psi|\hat{Q}\psi\rangle$$

Now, the outcome of this of course has to be real (it is an observable quantity), so we require $\langle Q\rangle = \langle Q\rangle^*$. Recalling that complex conjugation reverses the order of an inner product, this condition reads $\langle\psi|\hat{Q}\psi\rangle = \langle\hat{Q}\psi|\psi\rangle$, and it must hold for every state ψ. Operators representing observables therefore have the special property that they can be moved from one side of the inner product to the other:

$$\langle f|\hat{Q}g\rangle = \langle\hat{Q}f|g\rangle$$

for all functions f and g. Such an operator is called Hermitian.

In this sense, observables are represented by Hermitian operators

We can check this for the momentum operator:

$$\langle f|\hat{p}g\rangle = \int_{-\infty}^{\infty} f^*\left(-i\hbar\frac{dg}{dx}\right)dx = \left.(-i\hbar)\,f^*g\,\right|_{-\infty}^{\infty} + \int_{-\infty}^{\infty}\left(-i\hbar\frac{df}{dx}\right)^{*} g\,dx = \langle\hat{p}f|g\rangle$$
Which we get to using integration by parts. We note that the boundary term goes to zero because of our
restriction of the Hilbert Space (e.g. functions tend to zero at infinity).
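A numerical sketch (mine; the grid and the Gaussian test functions are assumptions) checking ⟨f|p̂g⟩ = ⟨p̂f|g⟩ with a finite-difference momentum operator:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(np.conj(f) * g) * dx

def p_hat(f):
    # p = -i hbar d/dx via central differences (test functions vanish at the grid edges)
    return -1j * hbar * np.gradient(f, dx)

# Two square-integrable test functions that go to zero at infinity
f = np.exp(-(x - 1)**2) * np.exp(2j * x)
g = np.exp(-(x + 2)**2 / 3)

print(inner(f, p_hat(g)))     # <f | p g>
print(inner(p_hat(f), g))     # <p f | g>  -- agrees, as Hermiticity requires
```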

The Hermitian conjugate (or adjoint) of an operator, denoted $\hat{Q}^\dagger$ (for a matrix, the transposed conjugate), is defined by the property:

$$\langle f|\hat{Q}g\rangle = \langle\hat{Q}^\dagger f|g\rangle$$

Thus a Hermitian operator is one that is equal to its own Hermitian conjugate, $\hat{Q}^\dagger = \hat{Q}$, which is exactly the rule we laid out above.

Determinate States

Ordinarily, when we measure an observable Q on an ensemble of identically prepared systems, we do not get the same result every time. But there are states which do always return the same value q; such a state is called a determinate state of that observable.

For this to be true, the standard deviation of Q in that state must be zero:

$$\sigma^2 = \left\langle\left(\hat{Q} - \langle Q\rangle\right)^2\right\rangle = 0$$

which requires

$$\hat{Q}\,f(x) = q\,f(x)$$

This is the eigenvalue equation for the operator, where f(x) is the eigenfunction and q is the corresponding eigenvalue.

Determinate states of Q are eigenfunctions of $\hat{Q}$.



Note that the eigenvalue is a number (not an operator or a function), and that any nonzero scalar multiple of an eigenfunction is still an eigenfunction with the same eigenvalue. The collection of all the eigenvalues of an operator is called its spectrum, and it can be discrete, continuous, or a mixture of the two.

As an example, the time-independent SEQ is itself an eigenvalue equation:

$$\hat{H}\psi = E\psi$$

3. Eigenfunctions of a Hermitian Operator (3.3)


From this, you can see that our attention turns to determining eigenfunctions of Hermitian operators. As
mentioned, these functions can fall into two categories

• Discrete: where the eigenvalues are separated from each other – eigenfunctions are
normalizable
• Continuous: where the eigenvalues fill the entire range – eigenfunctions are not normalizable,
although combinations may be normalizable (recall our discussion on the free particle case).

Some operators have only a discrete spectrum (e.g. the Hamiltonian of the harmonic oscillator), some have only a continuous spectrum (e.g. the free-particle Hamiltonian), and some have a combination of both.

For discrete spectra, the normalizable eigenfunctions of a Hermitian operator have three properties

1. Eigenvalues of the eigenfunctions are real


2. Eigenfunctions belonging to distinct eigenvalues are orthogonal
3. In a finite-dimensional space, the eigenfunctions span the space – they are complete

This is why the stationary states of the infinite well are orthogonal, and the same holds for any observable. Lastly, note that when there are degenerate eigenvalues we can always construct an orthonormal set of eigenfunctions within the degenerate subspace (e.g. by the Gram-Schmidt procedure). It can be tedious, but it can be done.

For continuous spectra, the eigenfunctions are not normalizable. Yet, this doesn’t mean they are
useless. We can investigate this through an example:

Ex. Find the eigenfunctions and eigenvalues of the momentum operator over all space.

If we define $f_p(x)$ to be the eigenfunctions and p the eigenvalues, we have:

$$-i\hbar\,\frac{d}{dx}f_p(x) = p\,f_p(x), \qquad\text{which has the general solution}\qquad f_p(x) = A\,e^{ipx/\hbar}$$

This solution is not square integrable, so the momentum operator has NO eigenfunctions that lie in the Hilbert space. Yet if we restrict ourselves to real eigenvalues, we see an interesting feature:

$$\int_{-\infty}^{\infty} f_{p'}^*(x)\,f_p(x)\,dx = |A|^2\int_{-\infty}^{\infty} e^{i(p-p')x/\hbar}\,dx = |A|^2\,2\pi\hbar\,\delta(p-p')$$

If we pick the amplitude $A = 1/\sqrt{2\pi\hbar}$ so as to cancel the leading coefficient, the inner product becomes exactly a delta function, $\langle f_{p'}|f_p\rangle = \delta(p-p')$, recovering a kind of orthonormality ('Dirac orthonormality'). Moreover, these eigenfunctions (with real eigenvalues) are complete, and any square-integrable function can be written in the form:

$$f(x) = \int_{-\infty}^{\infty} c(p)\,f_p(x)\,dp = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty} c(p)\,e^{ipx/\hbar}\,dp$$
Which is essentially just the Fourier transform. Thus, the eigenfunctions can be used to construct states
in the Hilbert Space through the Fourier transform relation.
Now, the eigenfunctions of momentum are sinusoidal, with wavelength $\lambda = 2\pi\hbar/p$, which is the de Broglie formula. It is important to note that, by the uncertainty principle, there is no such thing as a particle with an exact momentum; but we can imagine a normalizable wave packet with a narrow range of momenta, which is essentially what this construction describes.

Another example can be seen by taking the position operator:

Ex. Find the eigenfunctions and eigenvalues of the position operator:

Let $g_y(x)$ be the eigenfunction and y the eigenvalue:

$$\hat{x}\,g_y(x) = x\,g_y(x) = y\,g_y(x)$$

Here y is a fixed number for any given eigenfunction, while x is the continuous variable. Essentially: what kind of function has the property that multiplying it by x is the same as multiplying it by a constant? It has to be a delta function, zero everywhere except at x = y:

$$g_y(x) = A\,\delta(x - y)$$

This time the eigenvalue must be real, and although the functions are not square integrable, they are again Dirac-orthogonal, and if we pick the amplitude A = 1 they are Dirac-orthonormal, $\langle g_{y'}|g_y\rangle = \delta(y - y')$. They are also complete, and we can use them to construct other functions:

$$f(x) = \int c(y)\,g_y(x)\,dy = \int c(y)\,\delta(x - y)\,dy, \qquad\text{with } c(y) = f(y)$$

4. Generalized Statistical Interpretation (3.4)


If you measure an observable Q(x, p) on a particle in the state Ψ(x, t), you are certain to get one of the eigenvalues of the Hermitian operator $\hat{Q}\!\left(x, -i\hbar\frac{d}{dx}\right)$.

If the spectrum is discrete, the probability of getting the particular eigenvalue $q_n$ associated with the eigenfunction $f_n$ is

$$|c_n|^2, \qquad\text{where } c_n = \langle f_n|\Psi\rangle$$

A similar condition is found for a continuous spectrum with real eigenvalues. Upon measurement, the
wavefunction collapses to the corresponding eigenstate.

Now, because the eigenfunctions of the observable operator are complete, we can write the wave function as a linear combination of these eigenfunctions:

$$\Psi(x,t) = \sum_n c_n(t)\,f_n(x), \qquad\text{where } c_n(t) = \langle f_n|\Psi\rangle$$

Thus $c_n$ tells you "how much of the eigenfunction $f_n$ is contained in the state Ψ", and $|c_n|^2$ is the probability that a measurement returns the eigenvalue $q_n$ associated with $f_n$. From this we can rederive all of the results we discussed in Chapters 1 and 2 (for example, $\sum_n |c_n|^2 = 1$ and $\langle Q\rangle = \sum_n q_n\,|c_n|^2$).
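A numerical sketch of these last statements (my own; the infinite-well basis with ħ = m = 1 and the chosen state are assumptions):

```python
import numpy as np

x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
inner = lambda f, g: np.sum(np.conj(f) * g) * dx

# Energy eigenfunctions and eigenvalues of the infinite square well on [0, 1]
f_n = lambda n: np.sqrt(2.0) * np.sin(n * np.pi * x)
E_n = lambda n: (n * np.pi)**2 / 2.0

# Some normalized state Psi that vanishes at the walls
psi = x * (1 - x)**2
psi = psi / np.sqrt(inner(psi, psi).real)

ns = np.arange(1, 80)
c = np.array([inner(f_n(n), psi) for n in ns])

print(np.sum(np.abs(c)**2))                                   # ~ 1: probabilities sum to one
print(np.sum(np.abs(c)**2 * np.array([E_n(n) for n in ns])))  # <H> built from |c_n|^2 E_n
```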

5. The Uncertainty Principle (3.5)


Now, our original uncertainty principle can be recast using this more rigorous formalism as well. I will
leave this to the Griffiths text (section 3.5), but will take the main conclusion from this:
$$\sigma_A^2\,\sigma_B^2 \;\geq\; \left(\frac{1}{2i}\left\langle\big[\hat{A},\hat{B}\big]\right\rangle\right)^2$$

for any two operators $\hat{A}$ and $\hat{B}$. In essence, an uncertainty principle arises whenever two operators fail to commute. If we plug in the position and momentum operators, we find:

$$[\hat{x},\hat{p}] = i\hbar \qquad\text{such that}\qquad \sigma_x^2\,\sigma_p^2 \geq \left(\frac{\hbar}{2}\right)^2$$

which is the original Heisenberg uncertainty principle. In this sense, there is an uncertainty principle for every pair of observables whose operators do not commute.

You may also ask how this uncertainty can be minimized. The bound is saturated by a Gaussian wave packet (see the proof in Griffiths).

Additionally, because the uncertainty principle is a general theorem we can apply it to other operators.
For example, a useful case is the quantities of energy and time, as this is very important for photonics and
entangled-state generation.

We know the position-momentum uncertainty principle $\Delta x\,\Delta p \geq \hbar/2$. It is tempting to recast it into energy and time by a relativistic substitution, $x \rightarrow ct$ and $p \rightarrow E/c$, so that $\Delta x\,\Delta p = (c\Delta t)(\Delta E/c) = \Delta t\,\Delta E \geq \hbar/2$. However, this argument borrows from relativity, which does not mesh well with nonrelativistic quantum theory; in particular, in quantum mechanics time is not a dynamical variable (an observable), it is an independent parameter. So although the result looks straightforward, we need to be more careful with the reasoning. Let's derive it a bit more rigorously.

If we want to measure how fast the system is changing, let's compute the time derivative of the expectation value of some observable Q:

$$\frac{d}{dt}\langle Q\rangle = \frac{d}{dt}\langle\Psi|\hat{Q}\Psi\rangle = \left\langle\frac{\partial\Psi}{\partial t}\Big|\hat{Q}\Psi\right\rangle + \left\langle\Psi\Big|\frac{\partial\hat{Q}}{\partial t}\Psi\right\rangle + \left\langle\Psi\Big|\hat{Q}\frac{\partial\Psi}{\partial t}\right\rangle$$

And we know from the SEQ that

$$i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi$$

So,

$$\frac{d}{dt}\langle Q\rangle = -\frac{1}{i\hbar}\langle\hat{H}\Psi|\hat{Q}\Psi\rangle + \frac{1}{i\hbar}\langle\Psi|\hat{Q}\hat{H}\Psi\rangle + \left\langle\frac{\partial\hat{Q}}{\partial t}\right\rangle$$

But because the Hamiltonian is Hermitian, $\langle\hat{H}\Psi|\hat{Q}\Psi\rangle = \langle\Psi|\hat{H}\hat{Q}\Psi\rangle$, so we can rearrange:

$$\frac{d}{dt}\langle Q\rangle = \frac{i}{\hbar}\left\langle\big[\hat{H},\hat{Q}\big]\right\rangle + \left\langle\frac{\partial\hat{Q}}{\partial t}\right\rangle$$

This is interesting because it tells us that the rate of change of an expectation value is determined by the commutator of the operator with the Hamiltonian. In essence, if an operator commutes with the Hamiltonian (and has no explicit time dependence), then its expectation value is constant in time. Cool.

If we now apply the generalized uncertainty principle with $\hat{A} = \hat{H}$ and $\hat{B} = \hat{Q}$ (taking $\hat{Q}$ with no explicit time dependence), then

$$\sigma_H^2\,\sigma_Q^2 \;\geq\; \left(\frac{1}{2i}\left\langle[\hat{H},\hat{Q}]\right\rangle\right)^2 = \left(\frac{\hbar}{2}\right)^2\left(\frac{d\langle Q\rangle}{dt}\right)^2$$

If we let $\Delta E \equiv \sigma_H$ and define $\Delta t \equiv \sigma_Q\Big/\left|\frac{d\langle Q\rangle}{dt}\right|$, then

$$\Delta t\,\Delta E \geq \hbar/2$$

such that we recover the energy-time uncertainty relation. BUT, we should note carefully what we mean by Δt: it is the time it takes the expectation value of Q to change by one standard deviation. This time can and does vary depending on which observable Q you are watching; the definition is necessary to make the statement precise. Yet we can uncover an interesting feature:

If ΔE is small, then every observable changes only gradually (Δt is large). Conversely, if some observable changes very quickly, then the uncertainty in the energy must be large.

(think about this in terms of nonlinear optics. It becomes interesting and we will come back to this later
as a method to create entangled states in photons.)

6. Vectors and Operators (3.6)


Bases in the Hilbert Space

We describe vectors (states) in terms of their components along certain directions, but of course the axes are not unique and we are free to define other ones. The choice of basis therefore matters, and we need to understand how to transition between bases, since a particular basis can be much more convenient for a particular problem (e.g. integrating over a sphere is easier in spherical coordinates than in Cartesian).

In this sense, operators (representing observable variables) act as transformations on the Hilbert space, turning one vector into another:

$$|\beta\rangle = \hat{Q}|\alpha\rangle$$

Just as before, vectors are represented with respect to an orthonormal basis $\{|e_n\rangle\}$ through their components:

$$|\alpha\rangle = \sum_n a_n|e_n\rangle, \qquad |\beta\rangle = \sum_n b_n|e_n\rangle, \qquad a_n = \langle e_n|\alpha\rangle, \quad b_n = \langle e_n|\beta\rangle$$

Operators are expressed in a particular basis by their matrix elements:



$$Q_{mn} \equiv \langle e_m|\hat{Q}|e_n\rangle$$

With this we can represent transformations as:

$$\sum_n b_n\,|e_n\rangle = |\beta\rangle = \sum_n a_n\,\hat{Q}|e_n\rangle$$

Taking the inner product with $\langle e_m|$ and substituting in the matrix elements defined above, we find:

$$b_m = \sum_n Q_{mn}\,a_n$$

Dirac Notation

Dirac proposed to chop the bracket notation for the inner product into two pieces, which he called a bra $\langle a|$ and a ket $|b\rangle$. The latter is a (column) vector, but what about the former? It is actually a linear function of vectors: acting on a ket it produces a (complex) number. In the function space we can think of a bra as an instruction to integrate:

$$\langle f| = \int f^*\,[\cdots]\,dx$$

In a finite dimensional space, kets are columns containing the components of the vector in some basis
and the bras are rows containing the complex conjugates of the elements.

The ability to treat bras as separate quantities is interesting and useful. For example, if we take a normalized vector $|\alpha\rangle$, we can define:

$$\hat{P} \equiv |\alpha\rangle\langle\alpha| \qquad\longleftarrow\ \text{Projection Operator}$$

which essentially picks out the portion of any vector that lies along the direction of $|\alpha\rangle$:

$$\hat{P}|\beta\rangle = \big(\langle\alpha|\beta\rangle\big)\,|\alpha\rangle$$
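A tiny numpy sketch (my own; the vectors are arbitrary) of the outer-product construction and its action on another ket:

```python
import numpy as np

# A normalized ket |alpha> and an arbitrary ket |beta> in a 3-dimensional space
alpha = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2.0)
beta = np.array([0.5, 2.0, 1.0 - 1.0j])

# Projection operator P = |alpha><alpha| (outer product; the bra supplies the conjugate)
P = np.outer(alpha, alpha.conj())

print(np.allclose(P @ beta, np.vdot(alpha, beta) * alpha))   # P|beta> = (<alpha|beta>)|alpha>
print(np.allclose(P @ P, P))                                 # projection operators are idempotent
```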
Technically, what sits inside a bra or a ket is the name of the vector in question (in our examples above, α or β); it is not the actual vector. It is customary to name the vector after the function it represents, and if we work in the Hilbert space (L²) we can write:

$$\langle f|\hat{Q}f\rangle = \langle\hat{Q}f|f\rangle$$

In strict Dirac notation the left side should actually be written

$$\langle f|\hat{Q}|f\rangle$$

because the operator cannot act directly on the 'name' f; it must act on the ket that carries that name.

For the right side, the quantity $\langle\hat{Q}f|$ means the bra dual to the ket $\hat{Q}|f\rangle$, which strictly would be written

$$\big\langle\big(\text{name of the vector } \hat{Q}|f\rangle\big)\big|$$

but is universally abbreviated as $\langle\hat{Q}f|$.

A few properties to conclude:



The sum of two operators is defined through distribution:

$$\big(\hat{Q} + \hat{R}\big)|\alpha\rangle = \hat{Q}|\alpha\rangle + \hat{R}|\alpha\rangle$$

And the product of two operators is defined as:

$$\hat{Q}\hat{R}\,|\alpha\rangle = \hat{Q}\big(\hat{R}|\alpha\rangle\big)$$


Where the order of course matters.

Review of Linear Algebra


Here we review linear algebra, focusing on 1) allowing the scalars to be complex and 2) allowing vectors to live in spaces of infinite dimensionality.

Basic Rules of Vectors

A vector space consists of a set of vectors ($|\alpha\rangle, |\beta\rangle, |\gamma\rangle, \ldots$) and a set of scalars (a, b, c, ...) which can participate in two operations:

1) Vector addition: $|\alpha\rangle + |\beta\rangle = |\gamma\rangle$
   a. Vector addition is both commutative and associative
   b. There exists a zero vector $|0\rangle$ such that $|\alpha\rangle + |0\rangle = |\alpha\rangle$
   c. For every vector there is an inverse vector $|-\alpha\rangle$, which when added to it produces $|0\rangle$
2) Scalar multiplication: $a\,|\alpha\rangle = |\gamma\rangle$
   a. Scalar multiplication is distributive with respect to vector addition and to scalar addition, and it is associative
   b. Multiplication by particular scalars has the effect you would expect, e.g. $0\,|\alpha\rangle = |0\rangle$, and multiplication by −1 produces the inverse vector

Vectors can be linearly combined to create new vectors, by multiplying individual vectors by constants and summing: $a|\alpha\rangle + b|\beta\rangle + c|\gamma\rangle + \cdots$. A given vector $|\lambda\rangle$ is said to be linearly independent of the set $|\alpha\rangle, |\beta\rangle, |\gamma\rangle, \ldots$ if it cannot be written as a linear combination of them (for example, a 3-D vector with a z component cannot be written as a combination of $|x\rangle$ and $|y\rangle$ alone, so it is linearly independent of them). A collection of vectors is said to span the space if every vector in the space can be written as a linear combination of them. The number of vectors in any basis is called the dimension of the space; x, y, z form a basis of three vectors, so ordinary space is 3-dimensional.

Let us pick a basis set $|e_1\rangle, |e_2\rangle, \ldots, |e_n\rangle$, in which we can write a vector as $|\alpha\rangle = a_1|e_1\rangle + a_2|e_2\rangle + \cdots + a_n|e_n\rangle$. Of course, we can leave the basis vectors implicit and simply write the vector in terms of its components as $(a_1, a_2, \ldots, a_n)$. It is often easier to work this way when performing algebraic operations. The only downside is that you must commit yourself to a particular basis, and your expressions will look different in a different basis.

Inner Products

In 3-D we encounter two kinds of vector products, the dot product and the cross product. The cross
product does not generalize to n-dimensions, but the dot product does, and is also referred to as the
inner product. We write it as $\langle\alpha|\beta\rangle$, and it has the following properties:

$$\langle\beta|\alpha\rangle = \langle\alpha|\beta\rangle^*$$
$$\langle\alpha|\alpha\rangle \geq 0, \quad\text{and}\quad \langle\alpha|\alpha\rangle = 0 \text{ only if } |\alpha\rangle = 0$$
$$\langle\alpha|\,\big(b|\beta\rangle + c|\gamma\rangle\big) = b\,\langle\alpha|\beta\rangle + c\,\langle\alpha|\gamma\rangle$$

Because the inner product of a vector with itself is a nonnegative number, its square root is real, and we call it the norm of the vector, $\|\alpha\| \equiv \sqrt{\langle\alpha|\alpha\rangle}$; it generalizes the notion of 'length' to n dimensions. A vector whose norm is 1 is normalized. Two vectors whose inner product is zero are orthogonal (they have no mutual projection).

A collection of mutually orthogonal, normalized vectors is called an orthonormal set:

$$\langle\alpha_i|\alpha_j\rangle = \delta_{ij}$$

It is always possible, and almost always convenient, to select an orthonormal basis, in which the inner product of two vectors can be written neatly in terms of their components:

$$\langle\alpha|\beta\rangle = a_1^*\,b_1 + a_2^*\,b_2 + \cdots + a_n^*\,b_n$$

Another geometric quantity we can generalize is the angle between two vectors. In ordinary vector analysis $\cos\theta = (\mathbf{a}\cdot\mathbf{b})/|\mathbf{a}||\mathbf{b}|$, but when the components are complex this expression need not even be real. It is, however, always true that

$$|\langle\alpha|\beta\rangle|^2 \leq \langle\alpha|\alpha\rangle\,\langle\beta|\beta\rangle \qquad\text{(Schwarz inequality)}$$

so we can define the angle between the two vectors by:

$$\cos\theta = \sqrt{\frac{\langle\alpha|\beta\rangle\,\langle\beta|\alpha\rangle}{\langle\alpha|\alpha\rangle\,\langle\beta|\beta\rangle}}$$

Linear Transformations

Linear transformations take each vector in the space to another vector,

$$\hat{T}|\alpha\rangle = |\alpha'\rangle$$

provided that the operation is linear: $\hat{T}\big(a|\alpha\rangle + b|\beta\rangle\big) = a\,\hat{T}|\alpha\rangle + b\,\hat{T}|\beta\rangle$.

If you know how a particular transformation affects the basis vectors, you can easily determine what it does to any vector. Suppose

$$\hat{T}|e_j\rangle = \sum_{i=1}^{n} T_{ij}\,|e_i\rangle$$

Then for an arbitrary vector $|\alpha\rangle = \sum_j a_j|e_j\rangle$:

$$\hat{T}|\alpha\rangle = \sum_j a_j\,\hat{T}|e_j\rangle = \sum_i\Big(\sum_j T_{ij}\,a_j\Big)|e_i\rangle$$

In essence, the transformation takes a vector with components $a_j$ and turns it into a vector with components $a_i' = \sum_j T_{ij}\,a_j$.

In this case $\hat{T}$ is represented by an n × n matrix whose elements detail the transformation,

$$T_{ij} = \langle e_i|\hat{T}|e_j\rangle, \qquad\text{so that in matrix form}\qquad \mathbf{a}' = \mathbf{T}\,\mathbf{a}$$

The sum of two linear transformations is represented by the sum of the two matrices, while the product is the net effect of performing them in succession, first $\hat{T}$, then $\hat{S}$: $\mathbf{U} = \mathbf{S}\mathbf{T}$ (NOTE: THE ORDER MATTERS HERE, AS THIS IS MATRIX MULTIPLICATION). NOTE also that bold denotes a matrix.

Ok now a few terms which everyone should be familiar with, but just for clarity:

Transpose: the mirror image of the matrix about the main diagonal, denoted $\widetilde{\mathbf{T}}$ (note: we sometimes also use a tilde for complex scalar quantities such as $\tilde{\varepsilon}_r$; the difference is whether the object is a scalar or a vector/matrix)

Symmetric: a square matrix whose transpose equals the original, $\widetilde{\mathbf{T}} = \mathbf{T}$

Antisymmetric: a square matrix whose transpose reverses its sign, $\widetilde{\mathbf{T}} = -\mathbf{T}$

Conjugate: to take the conjugate of a matrix, we take the conjugate of each element, denoted by T*

Hermitian conjugate (or adjoint): the transposed conjugate, denoted by $\mathbf{T}^\dagger$



Hermitian (or self-adjoint): a square matrix that is equal to its own Hermitian conjugate

$$\mathbf{T}^\dagger = \mathbf{T}$$

Skew-Hermitian (or anti-Hermitian): a matrix whose Hermitian conjugate introduces a minus sign

$$\mathbf{T}^\dagger = -\mathbf{T}$$

Using this notation, the inner product of two vectors (with respect to an orthonormal basis) can be rewritten in a more compact form:

$$\langle\alpha|\beta\rangle = \mathbf{a}^\dagger\,\mathbf{b}$$

Matrix multiplication is in general not commutative (ST != TS), and the commutator is the difference
between these values:

[S,T] = ST - TS

The transpose (or Hermitian conjugate) of a product is the product of the transposes (or Hermitian conjugates) in reverse order:

$$\widetilde{\mathbf{S}\mathbf{T}} = \widetilde{\mathbf{T}}\,\widetilde{\mathbf{S}} \qquad\text{and}\qquad (\mathbf{S}\mathbf{T})^\dagger = \mathbf{T}^\dagger\,\mathbf{S}^\dagger$$

Inverse: the matrix $\mathbf{T}^{-1}$ such that $\mathbf{T}^{-1}\mathbf{T} = \mathbf{T}\mathbf{T}^{-1} = \mathbf{1}$, where $\mathbf{1}$ is the unit (identity) matrix

Singular: a matrix that has no inverse (equivalently, one whose determinant is zero)

Unitary: a matrix whose inverse equals its Hermitian conjugate, $\mathbf{U}^{-1} = \mathbf{U}^\dagger$

Unitary matrices are special, and we will look to exploit them. One property of unitary matrices is that if the original basis is orthonormal, the columns (and the rows) of the matrix form orthonormal sets.

Now, the components of a given vector, and the elements of the matrix representing a linear transformation, depend on our choice of basis, which we have so far more or less ignored. Let's consider how a change of basis affects them. Suppose we switch to a new basis $|f_i\rangle$, and express the old basis vectors $|e_j\rangle$ as linear combinations of the new ones:

$$|e_j\rangle = \sum_i S_{ij}\,|f_i\rangle$$

From our previous description we then know how the components transform (where the superscript labels the basis):

$$a_i^f = \sum_j S_{ij}\,a_j^e, \qquad\text{or in matrix form}\qquad \mathbf{a}^f = \mathbf{S}\,\mathbf{a}^e$$

Now, what about the matrix representing a linear transformation; how does it change under a change of basis? Writing the transformation in each basis, $\mathbf{a}'^e = \mathbf{T}^e\mathbf{a}^e$ and $\mathbf{a}'^f = \mathbf{T}^f\mathbf{a}^f$, and using $\mathbf{a}'^f = \mathbf{S}\,\mathbf{a}'^e = \mathbf{S}\,\mathbf{T}^e\,\mathbf{a}^e$ together with $\mathbf{a}^e = \mathbf{S}^{-1}\mathbf{a}^f$ (from the equation above), we conclude that:

$$\mathbf{T}^f = \mathbf{S}\,\mathbf{T}^e\,\mathbf{S}^{-1}$$

What this means is that two similar matrices, related by $\mathbf{T}_2 = \mathbf{S}\,\mathbf{T}_1\,\mathbf{S}^{-1}$, perform the same transformation expressed in two different bases. The interesting thing is that if the first basis is orthonormal and the matrix S is unitary, the new basis is also orthonormal. Since it is most convenient to work in orthonormal bases, we generally seek unitary transformations. Another interesting fact about similar matrices is that, although their entries may look different, the determinant and the trace are the same (this can be rigorously proven).

Eigenvectors and Eigenvalues

Let's consider a linear transformation in 3-D space consisting of a rotation about a specified axis by some angle. Most vectors change in a complicated way (they ride around on a cone about the axis), but vectors that happen to lie along the axis simply stay put ($\hat{T}|\alpha\rangle = |\alpha\rangle$). Moreover, if the rotation angle is π, vectors lying in the equatorial plane (perpendicular to the rotation axis) change sign ($\hat{T}|\alpha\rangle = -|\alpha\rangle$). These special vectors are called eigenvectors; they are vectors that the transformation turns into simple multiples of themselves:

$$\hat{T}|\alpha\rangle = \lambda\,|\alpha\rangle$$
The complex number λ is called the eigenvalue, essentially representing the scaling factor introduced by
the transformation.

In matrix form we can write it as:

$$\mathbf{T}\mathbf{a} = \lambda\,\mathbf{a} \qquad\text{or}\qquad (\mathbf{T} - \lambda\mathbf{1})\,\mathbf{a} = 0$$

Now, if the matrix $(\mathbf{T}-\lambda\mathbf{1})$ had an inverse, we could simply multiply by that inverse and conclude that $\mathbf{a} = 0$, which is only the trivial solution. Thus $(\mathbf{T}-\lambda\mathbf{1})$ must in fact be singular, which means its determinant is zero:

$$\det(\mathbf{T} - \lambda\mathbf{1}) = 0$$

This can be expanded into a polynomial equation for λ,

$$C_n\lambda^n + C_{n-1}\lambda^{n-1} + \cdots + C_1\lambda + C_0 = 0$$

where the coefficients C depend on the matrix elements of T. This is called the characteristic equation for the matrix (very similar to what you did in differential equations, transforming linear ODEs into a characteristic equation and solving it; in fact, there you were finding eigenvalues, if you can recall that course!). Let's work a brief example to see how it goes.

Ex: Let's consider the matrix

$$\mathbf{M} = \begin{pmatrix} 2 & 0 & -2 \\ -2i & i & 2i \\ 1 & 0 & -1 \end{pmatrix}$$

We determine the characteristic equation as follows:

$$\det(\mathbf{M}-\lambda\mathbf{1}) = (i-\lambda)\,\lambda\,(\lambda-1) = 0$$

This has roots 0, 1, and i. For each root we then determine an eigenvector with components $(a_1, a_2, a_3)$ from $(\mathbf{M}-\lambda\mathbf{1})\,\mathbf{a} = 0$, which produces three equations to solve. These give relationships between the components, and we finish by picking a value for one of the components (this works because any multiple of an eigenvector is still an eigenvector). We then obtain a vector with definite components, and we can repeat the procedure for the other two eigenvalues. In this example, the eigenvectors are:

$$\begin{pmatrix}1\\0\\1\end{pmatrix} \text{ for } \lambda = 0, \qquad \begin{pmatrix}2\\1-i\\1\end{pmatrix} \text{ for } \lambda = 1, \qquad \begin{pmatrix}0\\1\\0\end{pmatrix} \text{ for } \lambda = i$$

If the eigenvectors you find span the space, we can use them as a basis. In that basis the transformation matrix takes a very simple form: it is just the diagonal matrix of the eigenvalues, and the (normalized) eigenvectors become the standard unit column vectors.

The matrix that accomplishes this diagonalization is the similarity matrix S, which can be constructed by using the eigenvectors (expressed in the old basis) as the columns of $\mathbf{S}^{-1}$. In our example, then,

$$\mathbf{S}^{-1} = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1-i & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

and performing the similarity operation produces

$$\mathbf{S}\,\mathbf{M}\,\mathbf{S}^{-1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & i \end{pmatrix}$$
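A quick numpy check of this worked example (assuming the matrix written above; numpy's eig reproduces the eigenvalues and the diagonalizing similarity transformation):

```python
import numpy as np

M = np.array([[2, 0, -2],
              [-2j, 1j, 2j],
              [1, 0, -1]], dtype=complex)

# Columns of S^{-1} are the eigenvectors (numpy normalizes them, which is fine:
# any nonzero multiple of an eigenvector is still an eigenvector)
eigvals, S_inv = np.linalg.eig(M)
S = np.linalg.inv(S_inv)

print(eigvals)                        # 0, 1, and i, in some order
print(np.round(S @ M @ S_inv, 10))    # diagonal matrix of the eigenvalues
```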
Hermitian Transformations:

We have discussed the Hermitian matrix and the Hermitian conjugate, but what about a Hermitian transformation? The Hermitian conjugate of a transformation is the transformation $\hat{T}^\dagger$ that, when applied to the first member of an inner product, gives the same result as applying the original $\hat{T}$ to the second member:

$$\langle\hat{T}^\dagger\alpha|\beta\rangle = \langle\alpha|\hat{T}\beta\rangle \qquad\text{(for all } |\alpha\rangle, |\beta\rangle\text{)}$$

A Hermitian transformation is one with $\hat{T}^\dagger = \hat{T}$.

In quantum, a fundamental role is played by Hermitian transformations, and they have three crucial
properties (see Griffiths for the proofs):

1. The eigenvalues of a Hermitian transformation are real


2. The eigenvectors of a Hermitian transformation belonging to distinct eigenvalues are orthogonal
3. The eigenvectors of a Hermitian transformation span the space
   a. Note: even if the characteristic equation has degenerate roots, the eigenvectors can be chosen to be orthogonal, so this statement always holds. In essence it means that any Hermitian matrix can be diagonalized by a similarity transformation with S unitary. This is in fact a key point on which quantum mechanics is built.

Hilbert Space

To construct the real number system, mathematicians begin with the integers and use them to construct the rationals (ratios of integers). They then show that the rationals are dense, in the sense that between any two of them you can always find another. Yet the rationals still have gaps, because there are convergent sequences of rational numbers whose limit is not a rational number, such as:

$$1 - \frac{1}{2} + \frac{1}{3} - \cdots \pm \frac{1}{N} \;\longrightarrow\; \ln 2$$

Thus, to complete the number system we must include the limits of all such convergent sequences. The same thing happens with functions; for example,

$$f_N(x) = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^N}{N!}$$

is a polynomial for every finite N, but as N grows the sequence converges to $e^x$, which is not a polynomial. So $e^x$ lies outside the space of polynomials, yet it is the limit of a sequence of polynomials. To complete the space, we need to include the limits of all such sequences that can be built from our chosen basis.

A complete inner product space is called a Hilbert space. In quantum mechanics the L² space is the key example, because that is where wave functions live; when you hear the term 'Hilbert space' in this context, it is intended to mean the square-integrable L² space.

The eigenfunctions of the Hermitian operators $i\hat{D}$ (where $\hat{D} = d/dx$) and $\hat{x}$ are of particular importance. They take the form

$$f_\lambda(x) = A_\lambda\,e^{-i\lambda x} \qquad\text{and}\qquad g_\lambda(x) = B_\lambda\,\delta(x-\lambda)$$

The set of eigenvalues of a given operator is called its spectrum, and these two operators have continuous spectra. Unfortunately their eigenfunctions do not lie in the Hilbert space; they are not square integrable! However, they are orthogonal in the sense that the inner product of two of them with different eigenvalues gives a delta function:

$$\langle f_\lambda|f_\mu\rangle = |A|^2\,2\pi\,\delta(\lambda-\mu) \qquad\text{and}\qquad \langle g_\lambda|g_\mu\rangle = |B|^2\,\delta(\lambda-\mu)$$

We can sort of 'normalize' these by selecting the leading coefficients $A_\lambda$ and $B_\lambda$ so that the result is just the delta function (with unit amplitude). In this case we set:

$$f_\lambda(x) = \frac{1}{\sqrt{2\pi}}\,e^{-i\lambda x} \qquad\text{such that}\qquad \langle f_\lambda|f_\mu\rangle = \delta(\lambda-\mu)$$

$$g_\lambda(x) = \delta(x-\lambda) \qquad\text{such that}\qquad \langle g_\lambda|g_\mu\rangle = \delta(\lambda-\mu)$$

Note: this 'quasi-normalization' should sound dubious. Its use was pioneered by Dirac (other mathematicians disputed it), who was confident he could get away with it because these functions live in the 'suburbs' of the space of normalizable functions. The scheme, often called Dirac orthonormalization, was very successful and turns out to be quite powerful, even if it does not at first seem rigorous.

If we use these 'normalized' eigenfunctions as bases for L², the linear combination becomes an integral:

$$|f\rangle = \int a_\lambda\,|f_\lambda\rangle\,d\lambda \qquad\text{and}\qquad |f\rangle = \int b_\lambda\,|g_\lambda\rangle\,d\lambda$$

Again this sounds strange, because the basis functions themselves do not lie inside our L² space (but perhaps that is to be expected in quantum mechanics!). The main point is that these functions are complete, and completeness is what we actually need.

If we take the inner product with $\langle f_\mu|$ and use the orthonormality, we obtain:

$$\langle f_\mu|f\rangle = \int a_\lambda\,\langle f_\mu|f_\lambda\rangle\,d\lambda = \int a_\lambda\,\delta(\lambda-\mu)\,d\lambda = a_\mu$$

And

$$a_\lambda = \langle f_\lambda|f\rangle = \frac{1}{\sqrt{2\pi}}\int e^{+i\lambda x}\,f(x)\,dx = \mathcal{F}(-\lambda) \qquad\leftarrow\text{ Fourier transform}$$

What this tells us is that the −λ component of the vector $|f\rangle$ in the 'derivative' basis IS the Fourier transform of the function f(x). Likewise,

$$b_\lambda = \langle g_\lambda|f\rangle = \int \delta(x-\lambda)\,f(x)\,dx = f(\lambda)$$

so the λ component of the vector $|f\rangle$ in the position basis is f(λ).

Although we can no longer literally represent operators as (finite) matrices, we are still interested in quantities of the form

$$\langle f_\lambda|\hat{T}|f_\mu\rangle$$

and these, indexed by the continuous labels λ and μ, are still called the matrix elements of the operator.
