Quantum Mechanics
ABSTRACT: This paper constitutes a general overview of
quantum mechanics. It is divided into three sections. The first
section outlines, from a historical perspective, the major ideas and
experiments that contributed to the development of quantum
mechanics. The second section outlines the major interpretations
that, in accounting for the results of quantum mechanical
experiments, have made their way into the mainstream over the
years. The third section evaluates these interpretations in order to
assess their worth for further consideration, the goal being to
decide upon one as the formal position we will be taking on this website.

Introduction
There are two papers on this website that the reader will not get through without a rough understanding of quantum mechanics. These papers are Determinism and Free-Will and The Universe and "God". Therefore, I have included this paper, which, it is hoped, will suffice for the necessary background the reader will need in order to proceed with those papers. This paper is divided into three major sections in addition to this introduction. To start with, we cover the basics of quantum mechanics (although "basics" is probably a misleading term), followed by the many interpretations that have emerged over the years. Finally, we will sift through these interpretations, evaluating each one on its strengths and weaknesses, the aim being to settle on the one that proves the most internally consistent and has the greatest explanatory power.

Indeed, quantum mechanics does need interpretation. But so does every other science. However, this is especially true of quantum mechanics because in no other field of science has there been such a contrast between the raw data it gathers and what that data means in terms of the state of our world. The reason for this is that, quite early after its inception, the most intuitive interpretations scientists were faced with making made little to no sense at all. They flew directly in the face of everything they had been trained to believe about the nature of our world - that is, against everything classical mechanics stood for. For example, classical mechanics would never predict that a particle could exist in more than one place at a time, yet this is what the data gathered by quantum mechanical experiments seemed to entail. Classical mechanics firmly insists that two rigid bodies cannot penetrate each other, or at least that if they do, it is only due to one having been given sufficient energy in order to do so, and that this energy is never given spontaneously or out of nothingness. Yet quantum mechanics features a phenomenon whereby a particle, without sufficient energy to do so, spontaneously penetrates a barrier, passing from one side to the other, without even leaving a dent in the barrier. Such results seemed completely absurd, and so to accept them was questionable at best. This is why the contrast between reading the raw data and crafting interpretations of that data became so glaring. Whereas, in most scientific experiments, the most intuitive interpretation could be - more or less - treated as fact without much dispute, the interpretations of quantum mechanical experiments simply couldn't. It felt more natural to doubt the conclusions the data led to - so much so that various competing interpretations sprang forth, making it even more evident that one had to interpret the data.

These interpretations could not be falsified so easily. The central problem in falsifying an interpretive account of quantum mechanics is that what needs interpreting is what happens to the phenomena in question when they are not being measured. That is, because, as we shall see, measurement affects the phenomena being investigated in quantum mechanical studies - an essential lesson that falls out of quantum mechanics - the most intriguing question concerns
what the states and nature of these phenomena are when they are not being measured - that is, when we aren't observing them. Obviously, any answer one conjures up to this question cannot be determined scientifically, since science demands observation and measurement as the basis upon which answers can be drawn forth. Therefore, such answers are never more than speculation and guesswork, and thus it becomes clear that we can do nothing more than interpret the data.

There are many such interpretations today, but for the sake of brevity, we will only look at the few major ones taken
seriously by experts in the field. We will do this in the second part of this paper. First, however, let's understand the
basics of quantum mechanics such that we understand the data that these interpretations aim to account for. We will avoid mathematics as much as possible (primarily because I don't understand it myself), and stick to a
chronological description of the subject, touching on each of the major contributions to the field as they made their
mark in history.

The Basics
The world was not exposed to quantum mechanics overnight. It was not presented as one whole theory in the way Darwin's theory of evolution or Einstein's theory of special relativity was. All told, quantum mechanics was a body of
experimental work, theoretical insight, and mathematical development that evolved by the hands of numerous
thinkers over the course of almost thirty years. It began in 1901 with a simple idea and culminated in 1927 with the
formal doctrine of what we now call quantum mechanics. The evolution of quantum mechanics can be divided into
two major eras - the pre-war era and the post-war era. The pre-war era features Planck's energy quantization
hypothesis, Einstein's application of the latter to various problems in physics, and Bohr's revised model of the atom.
The post-war era features de Broglie's hypothesis that material particles travel as waves, Heisenberg's Uncertainty
Principle, the Davisson-Germer experiment, and Heisenberg and Bohr's overall interpretation of the above in what
they called the Copenhagen Interpretation. It is really the post-war era that set quantum mechanics apart from the rest
of physics, and in which we find principles of such counterintuitive caliber that it shakes the foundations of even a
layman's understanding of how the everyday world works. The pre-war era features some pretty revolutionary ideas
as well, but they weren't enough to be compartmentalized into a whole new discipline all its own. Nevertheless, the
complexities of these pre-war insights are plenty and go deep. In fact, they stem from several centuries of
accumulated knowledge that likewise go deep, and it would be difficult, if not impossible, to explain the pre-war
developments without briefly touching on these pre-twentieth century concepts. Therefore, we will have to attempt a
brief but thorough walkthrough of all the relevant physics as it was understood at the turn of the century and through
the following decade and a half. For some readers, this may be too much for such a brief overview, and for this
reason, I have supplied a list of links (below) to some very good introductory websites for non-experts.
Nonetheless, if the reader feels confident in delving right into the subject matter, then let's focus on the pre-war era
first, beginning with the idea that started it all - Planck's energy quantization hypothesis.

http://freedocumentaries.net/media/123/Uncertainty_Principle/
http://msc.phys.rug.nl/quantummechanics/
http://phys.educ.ksu.edu/
http://www.hi.is/~hj/QuantumMechanics/quantum.html
http://theory.uwinnipeg.ca/physics/quant/node1.html
http://hyperphysics.phy-astr.gsu.edu/hbase/hframe.html

Energy Quanta
The history of quantum mechanics begins with Max Planck, a physicist whose interest lay in the phenomenon of black body radiation. This term refers to solid objects that absorb all the electromagnetic radiation that falls upon them. Light is a kind of electromagnetic radiation, as shown in figure 1, and so if a black body absorbs all the radiation incident upon it, then it absorbs all light, and is therefore rendered completely black - hence the name "black body". Although no radiation is reflected, black bodies do, nevertheless, emit electromagnetic radiation. Before 1901, when Planck grabbed the attention of the scientific community, physicists used a particular formula to calculate the
number of "modes" corresponding to a particular frequency of black body radiation.


What a mode is is not important for this discussion. In fact, for our purposes, we can use the word "energy" in place of "modes", since the energy corresponding to a specific frequency of radiation is proportional to the number of modes. This formula was problematic, the reason being that at high frequencies, it led to the "ultraviolet catastrophe", as it was called. When the frequency of radiation emitted by a black body is high enough (around the ultraviolet range and higher), the amount of energy (or number of modes) this formula yields is infinite. To physicists, this was clearly an absurd result. It meant that all black bodies everywhere, and even other objects that only approximate the description of black bodies, were emitting infinite energy, and we should all be doused with it (and consequently singed to death). Physicists longed for a solution to this problem, and when Planck came along, he proved to be just what the doctor ordered. What he proposed was that for a given frequency, there is a minimum amount of energy that can correspond to that frequency, and any other quantity of energy can only come in integer multiples of that minimum. For example, if we represent the frequency by f and multiply it by h = 6.626×10⁻³⁴ J·s (known as Planck's constant), the energy carried by a wave of electromagnetic radiation can be E = hf, E = 2hf, E = 3hf..., but never E = ½hf, E = 1½hf, or E = 0hf. This theory was the key to resolving the ultraviolet catastrophe because it meant that the formula needed revising in such a way that it no longer computed an infinite amount of energy for high frequencies of radiation.

Figure 1: The electromagnetic spectrum
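
To make the quantization rule concrete, here is a minimal sketch in Python (my own illustration, not part of the original article) listing the first few allowed energies E = nhf for a single frequency. The frequency used is merely illustrative, roughly that of green visible light.

```python
# Allowed energies under Planck's hypothesis: E = n * h * f, with n = 1, 2, 3, ...
h = 6.626e-34          # Planck's constant, in joule-seconds
f = 5.6e14             # an illustrative frequency (roughly green light), in hertz

quantum = h * f        # the minimum energy that can correspond to this frequency
print(f"Energy quantum hf = {quantum:.3e} J")

# Any energy carried at this frequency must be a whole-number multiple of hf.
for n in range(1, 6):
    print(f"n = {n}: E = {n * quantum:.3e} J")

# Fractional multiples such as 0.5*hf or 1.5*hf are simply not allowed.
```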

The quantization of energy was not initially intended to revolutionize physics, but scientists soon realized that the implications this subtle move had for physics in general were momentous. In fact, Planck himself was doubtful that the quantization of energy had any significant meaning beyond a mathematical formality - that is, he considered his solution to the ultraviolet catastrophe "fudging the math" in order to make the formula fit the data. It was Albert Einstein who saw the real potential in the idea of energy quanta to solve various conceptual problems that had been haunting physicists for a while. He proposed, in 1905, that the reason the new formula for the modes of black body radiation worked so well was that the radiation emitted by black bodies was actually composed of particles of energy, the fundamental quanta of energy that Planck hypothesized only as a mathematical accommodation. Later dubbed "photons" by Gilbert Lewis, these particles now made energy seem very much like matter in that it could not be divided indefinitely - that is, just as matter can be repeatedly divided until one reaches the fundamental and indivisible particles that compose it, so it is with energy. Energy was no longer seen as the smooth and continuous thing that classical physics had assumed. One couldn't just have any arbitrary amount.
One could only have multiples of hf - the basic amount carried by the photon.

Einstein also proposed that the quantization of energy be used to resolve another problem physicists had been grappling with: the photoelectric effect. The photoelectric effect is what they call the immediate ejection of electrons from certain metals when radiation of high enough frequency is incident upon them. The electrons are ejected with more energy at higher frequencies than at lower frequencies, with no ejection at all below a certain frequency specific to the metal. This makes sense to us today because we know that higher frequency radiation carries higher energy, but this was not assumed to be the case before the twentieth century. It was assumed that frequency had nothing to do with energy. In fact, it was assumed that the electrons in the metal would need time to build up the amount of energy required to break their bonds to their nuclei. The radiation incident upon them would have to provide them with energy gradually and steadily. One could shorten this time by increasing the intensity of the radiation. The increase in intensity would increase the rate of energy provided because there would be more energy per individual wave. It was puzzling, therefore, that this should not happen in practice. In practice, when one directs radiation onto the metal, electrons are ejected instantly (no delay for buildup) and furthermore, no electrons are ejected, period (no matter how much time passes), if the frequency is below a critical level. The intensity still has an effect, but only in that the number of electrons ejected will increase in proportion to the intensity, not the amount of energy with which they are ejected.

By imagining radiation composed of particles whose energy is proportional to their frequency, Einstein solved the
mystery of the photoelectric effect as follows. Because a photon carries a discrete amount of energy, when it collides
with an electron, all that energy is imparted to the electron. It is
either enough to break the electron's bond from its nucleus or it is
not. The greater the amount of energy carried by that photon, the
more energy with which it will be able to launch the electron out
from the metal. When this energy is too low, however, it will not
be able to tear the electron away from its nucleus at all, accounting
for the total lack of ejected electrons when the frequency is below
a critical level. The immediacy of electron ejections at higher
frequencies is also accounted for. They are ejected immediately
because the energy carried by the photon is imparted to the
electron all at once, not over an extended period of time. The
greater quantity of electrons ejected with the increase in intensity
is accounted for by the fact that an increase in intensity corresponds
to an increase in the number of photons constituting the incident
radiation. With more photons, the likelihood of an electron being
hit by one increases, and so we see electrons being ejected more
frequently.
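
This reasoning is usually summarized by the textbook form of Einstein's photoelectric relation, which the article itself doesn't write out: the ejected electron's maximum kinetic energy is the photon energy hf minus the "work function" (the energy binding the electron to the metal). The sketch below is a hedged illustration in Python; the work function value is only a typical order of magnitude, not a property of any particular metal discussed here.

```python
# A rough illustration of the photoelectric effect: KE_max = h*f - W,
# where W (the "work function") is the energy needed to free an electron.
h = 6.626e-34            # Planck's constant, J·s
eV = 1.602e-19           # one electron-volt in joules
W = 2.3 * eV             # an illustrative work function (order of magnitude only)

def ejected_energy(f):
    """Return the maximum kinetic energy of an ejected electron, or None
    if the photon energy is below the threshold (no ejection at all)."""
    photon_energy = h * f
    if photon_energy < W:
        return None          # below the critical frequency: no electrons ejected
    return photon_energy - W

for f in (3e14, 6e14, 9e14, 1.2e15):
    ke = ejected_energy(f)
    if ke is None:
        print(f"f = {f:.1e} Hz: no ejection")
    else:
        print(f"f = {f:.1e} Hz: KE_max = {ke / eV:.2f} eV")

# Raising the intensity adds more photons (more ejected electrons per second),
# but it does not change KE_max; only the frequency does.
```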

Another problem the quantization of energy came to bear upon was that of atomic line spectra. When certain elements are heated to the point where they emit light and this light is channeled through a prism, strange strips of color are produced in patterns unique to the element in question. This rarely occurs with other light sources, such as fire or light bulbs. If a prism is held up to a light bulb, for example, the light that passes through it will be refracted such that a rainbow-like pattern will be generated. The reason this happens is that the light entering the prism is a mix of different frequencies, which includes the range of visible frequencies (red to purple). When mixed together, the range of frequencies corresponding to color appears white. Different frequencies refract at different angles, and so when a prism refracts white light, each color refracts at a different angle. This results in the white light "splitting" into its constituent colors. In effect, what a prism does is separate the different colors of light such that a rainbow-like pattern is produced when, after leaving the prism, the light falls upon a surface like a wall or a screen (in fact, this is exactly how rainbows are produced - sunlight is refracted as it passes through prism-like raindrops). The phenomenon of atomic line spectra is an example of this process, except that the bands of color are not as smooth and continuous. Rather, it is as if each element chooses just a few extremely narrow strips from the entire range. That is to say, only a few very discrete
frequencies (colors) appear when the light from these elements, when heated, is put through a prism. Figure 2, for example, shows the atomic line spectra for hydrogen, helium, and oxygen.

Figure 2: The atomic line spectra for hydrogen, helium, and oxygen.

For the longest time, scientists couldn't understand why the light from these elements was refracted in such discrete strips, and so uniquely for each element. Planck's quantization hypothesis, along with Einstein's proposal that electrons absorb and emit energy as photons containing specific amounts of energy, offered a plausible explanation for this, and in 1913, Niels Bohr seized the opportunity to propose it. He suggested that these strips come about by electrons in the elements relinquishing their energy - which occurs more readily the more the element is heated - only in a small set of discrete amounts, and these amounts are emitted as whole photons. Because the amount of energy carried by a photon corresponds to a specific frequency of electromagnetic radiation, these discrete amounts correspond only to a finite set of specific frequencies. Thus, the specific strips we get represent the amounts of energy that the element in question can relinquish as individual photons. These energy amounts determine the color and position of the strips. They do so by determining the frequency of the emitted radiation, and as we have seen, the frequency determines the color and the angle of refraction, and thus its position on the spectrum.

Bohr's theory was actually more than just an idea about electrons emitting photons. In a metaphorical sense, he saved the atom. He saved it by replacing the older Rutherford model, which had its share of problems, with his own model. The major problem the Rutherford model suffered was that, if it was true, the atom shouldn't exist. The Rutherford model depicts atoms as a tight cluster of protons at the center (neutrons hadn't been discovered at the time) with the electrons orbiting this cluster a certain distance away (like the planets around the Sun). The problem with this model is that it predicts that the atom should collapse in a fraction of a second after it is created. This is because the electrons should be constantly radiating energy, thereby losing the energy required to keep them in their orbits. Consequently, they should crash into the nucleus, effectively ending the life of the atom. What Bohr postulated was that electrons only lose energy by way of radiation when they drop from a higher energy level to a lower one (explained below). This drop is
accompanied by the emission of a photon carrying the energy difference between the two levels, and in accordance with Planck's hypothesis, this energy corresponds to a specific frequency, and this is the frequency of the emitted photon. Likewise, electrons can jump to a higher energy level by absorbing photons. But when the electron is not jumping between energy levels, Bohr says, it radiates no energy. Furthermore, to remain perfectly consistent with Planck, there must be a minimum energy level. Planck's hypothesis says that no particle can have zero energy (E=0hf, remember, is not an option). This is the key to saving the atom from the Rutherford model. Bohr tells us that electrons don't constantly radiate energy - only when they drop energy levels - and that electrons can't drop below a minimum level. Being at this minimum level allows the electrons to remain in orbit around the nucleus, and thus the atom's life is preserved.

But what is an "energy level"? Bohr explains this with his concept of "orbitals". Unlike in the Rutherford model,
electrons in the Bohr model can't orbit their nucleus at any arbitrary distance. To deviate from their orbit even
slightly, moving either away from the nucleus or towards, would mean either acquiring the energy to do so or losing
it. But according to the Bohr model, electrons only acquire or lose energy by full quantum amounts, and these
amounts must correspond to the allowed distances from which the electron can orbit the nucleus. In other words, an
electron cannot orbit its nucleus anywhere between these discrete distances. The few orbits made possible by this
restriction Bohr called "orbitals". Because electrons need a specific
amount of energy to be in a particular orbital, each orbital corresponds
Atomic to a specific "energy level" that the electron is said to be at. Dropping
Orbitals or rising to lower or higher energy levels is essentially equivalent to
dropping or rising to lower or higher orbitals. The difference in energy
between orbitals depends, not only on how high up the orbitals in
question are, but on the idiosyncrasies of the atom. For example, the
number of protons and electrons belonging to the atom in question has
Yet Another an effect on the energy difference between orbitals. Also, the electrons
Model have effects on each other, and this affects the amounts of energy they
can emit or absorb, which in turn determines the energy levels they are
capable of acquiring. All these factors make for a unique atomic
signature, and this explains, not only the presence of atomic line
spectra, but also their uniqueness for each and every element.
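
As a concrete illustration of how discrete energy levels produce discrete spectral lines, the sketch below uses the standard Bohr-model formula for hydrogen, E_n = −13.6 eV / n² (a textbook result the article doesn't quote), and converts the energy released by a few downward jumps into photon wavelengths. Treat it as a sketch of the idea, not as a reproduction of anything computed in the article.

```python
# Hydrogen in the Bohr model: E_n = -13.6 eV / n^2.
# A drop from level n_i to n_f emits a photon of energy E_i - E_f,
# and that energy fixes the photon's frequency (E = h*f) and wavelength.
h = 6.626e-34        # Planck's constant, J·s
c = 3.0e8            # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

def level_energy(n):
    return -13.6 * eV / n**2

def emitted_wavelength_nm(n_initial, n_final):
    delta_E = level_energy(n_initial) - level_energy(n_final)  # energy of the photon
    return h * c / delta_E * 1e9                                # wavelength in nanometres

# A few visible-range hydrogen lines (drops ending on level 2):
for n_i in (3, 4, 5):
    print(f"{n_i} -> 2: {emitted_wavelength_nm(n_i, 2):.0f} nm")
```

Each transition prints a different, fixed wavelength, which is exactly why a heated element yields a few sharp strips rather than a continuous rainbow.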

The reader can see how useful the quantization of energy really was to the scientific community. It is rare that a
scientific hypothesis like Planck's bears so many remedies. It therefore earned the esteem of physicists the world
over, and a major shift in how they came to see nature took place - nature is quantized. But this shift wasn't without
problems of its own, and we will now take a look at the major difficulty physicists had to grapple with if they were
to accept this shift in perspective wholeheartedly.

Wave/Particle Duality
Quantum theory, as it eventually came to be known, although widely regarded as a revolutionary idea, did not
formally split physics into two mutually exclusive camps - namely, what we now call classical and quantum
mechanics. Despite opening the scientific community up to a new understanding of the nature of light, what really
made quantum theory mind-bogglingly strange was what they discovered after doing in-depth experiments on this
fundamental particle of energy. But these experiments weren't conducted until after the war, and in my opinion,
contributed to the zany character of the turbulent twenties. Between 1905 and the twenties, however, not much further
development on Planck's quantum theory unfolded (with the exception of Bohr's atomic model in 1913 of course).
During this time, the new corpuscular model of light, along with the solutions it afforded the ultraviolet catastrophe,
the photoelectric effect, atomic line spectra, and other such enigmas, was still conceivable or intelligible - that is, it
could still be imagined - and technically didn't violate the central tenets of classical physics (or physics as
understood up until that time). One could still visualize light traveling as a stream of particles, as well as the inner
workings of the photoelectric effect and atomic line spectra. But before the twenties were over, scientists were
beginning to realize that the results yielded by quantum mechanical experiments were pointing in the direction of the
unimaginable. The most plausible interpretations were extremely difficult, if not impossible, to visualize. It was this
realization that truly prompted the schism between classical and quantum physics - it was only at the end of the
twenties that scientists realized that they had embarked on a whole new discipline of science that had never been
dreamt before.


There is one minor exception to this - an exception to the conceivability of quantum theory before the nineteen twenties. The new corpuscular model of light couldn't just be accepted without giving any thought to the evidence supporting the more familiar wave model. This was not the first time there had been ambiguity concerning the wave and corpuscular theories of light, but the question seemed to have been answered long before Planck proposed his quantum hypothesis. The wave model of light had been settled by experiments designed to test for those properties commonly associated with waves, such as reflection, refraction, diffraction, and so on. The most notable one was the double-slit experiment. The double-slit experiment is set up so that a light source of some kind is aimed at a wall with two small, parallel slits in it that are usually less than an inch apart. A certain distance away, on the other side of the wall, is a screen. When the light source is turned on, the light passes through the two slits and illuminates the screen on the other side. But this illumination does not take the form of two parallel lines similar to the slits. Instead, we get several parallel bands of light (5 to 10 or thereabouts) that are brightest at the center and steadily darken as one moves to the perimeter. Where do these extra bands come from? The presence of these bands can be understood by considering the wave nature of light and what that entails about its passage through the two slits. It entails that the phenomenon known as diffraction will occur. Diffraction is what happens to a wave of any kind (water, air, light, etc.) when it passes through a narrow channel like the two slits. Upon coming out the other side, the waves disperse in a variety of directions. They do not continue solely on the straight path on which they may have entered the channel; rather, they spread out in a circular or radiant fashion. Because this occurs to the waves passing through both slits, they end up interfering with each other. That is, the crests and troughs of waves coming from one slit end up overlapping with the crests and troughs of waves coming from the other slit. This results in the amplitude of the waves doubling at those interference points. When the amplitude doubles, the light becomes brighter. What we see, in effect, when we look at the series of light bands on the screen are the interference points that happen to line the screen. That is, they are points where waves from one slit cross the waves from the other slit, making the light at those points brighter than at other areas along the screen. We call this an "interference pattern". It is taken as a very strong sign that the phenomenon under study is a wave phenomenon.
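
For readers who want to see where the bright bands fall quantitatively, the sketch below evaluates the standard idealized two-slit intensity pattern, I(θ) ∝ cos²(π·d·sinθ/λ), a textbook formula the article doesn't derive. The wavelength, slit separation, and screen distance are made-up values chosen only so that a handful of bands fit on a small "screen".

```python
# Idealized two-slit interference: bright bands occur where waves from the
# two slits arrive in step, i.e. where d*sin(theta) is a whole number of wavelengths.
import math

wavelength = 550e-9    # illustrative light wavelength (green), metres
d = 10e-6              # illustrative slit separation, metres
L = 1.0                # illustrative slit-to-screen distance, metres

def relative_intensity(x):
    """Relative brightness at position x on the screen (1.0 = brightest)."""
    theta = math.atan2(x, L)
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Print a crude brightness profile across the screen, from -10 cm to +10 cm.
for i in range(-20, 21):
    x = i * 0.005                     # positions in steps of half a centimetre
    bar = "#" * int(40 * relative_intensity(x))
    print(f"{x*100:6.1f} cm |{bar}")
```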

Other experiments like this, experiments that tested for other wave properties of light, gave very strong credence to
the wave model of light, and for the better part of the 19th century, scientists were convinced that light and other
forms of electromagnetic radiation were waves. But after Planck's and Einstein's corpuscular picture of light became
widely accepted, it didn't take much to return to the old question about whether light was a wave, which explains the
results of the aforementioned experiments quite well, or a stream of particles, which explains other phenomena such
as black body radiation, the photoelectric effect, and atomic line spectra. Something was obviously wrong, or at
least incomplete, about the picture thus far developed. It was difficult to imagine the true form light and
electromagnetic radiation must take. The best they could do was describe it in terms of the properties it exhibited.
They would say that at times, light behaves like a wave, but at other times, it behaves like a stream of particles. It
exhibits wave properties but also particle properties. They called this quirk "wave/particle duality". To imagine
such a thing bearing the properties of only one unified phenomenon - the true form light takes - rather than two -
waves and particles - proved quite difficult indeed. It seemed as though the quantization of energy was already taking us into the world of the unimaginable, yet there was no persuasive reason to believe that the unimaginable character of wave/particle duality wouldn't eventually resolve itself, with someone presenting a much clearer model that fit both the properties of waves and those of particles in a fully intelligible manner. Until then, physicists
were content to believe that the true form of light had yet to be unraveled rather than that it couldn't be. By the end of
the twenties, however, this attitude shifted to the latter.

The first major step towards this end was a hypothesis put forward in 1923 by Louis de Broglie, thereafter known as the de Broglie hypothesis. He proposed that if electromagnetic waves are composed of fundamental particles called photons, then perhaps everything composed of fundamental particles travels as waves. An experiment conducted in 1927 by Clinton Davisson and Lester Germer, known as the Davisson-Germer experiment, confirmed the de Broglie hypothesis. The Davisson-Germer experiment is essentially a double-slit experiment conducted on particles of matter rather than light. For example, it is possible to set up an electron gun to fire a stream of electrons at the slitted wall and replace the screen with an electrosensitive plate. The electrosensitive plate is marked wherever an electron collides with it, and so it is possible to see where the
electrons are streaming. This experiment shows that a stream of electrons will produce the same interference pattern as that seen when light is used instead. It gets even stranger when the gun is set up to fire only one electron at a time with lengthy rest periods in between. The same interference pattern shows up. This is indeed strange because it implies that a single electron will travel in the form of a wave and pass through both slits at the same time. Physicists call this phenomenon "superposition" - as in having more than a singular position. Furthermore, as a wave, it will interfere with itself, amplifying its own crests and troughs at key points, thus creating the interference pattern. The Davisson-Germer experiment has been done with a whole slew of material particles, including whole atoms (see sidenote), and they all exhibit the same interference pattern. In other words, de Broglie knew that all matter, at least as individual particles (and sometimes atoms), travels as waves, and Davisson and Germer proved it experimentally.

Of course, findings like these fly directly in the face of, not only the expectations of experts in the field, but the basic
intuition of average laymen about the way the physical world works. If matter travels as waves in these experiments,
why don't we experience matter that way in everyday life? Why is it that when a pitcher throws a baseball, the
baseball doesn't end up diffusing itself in the form of a wave? Well, the kinds of experiments conducted on material particles were never performed on large objects like baseballs, so the claim that material things travel as waves could only be made about individual particles (and sometimes atoms). However, when physicists put their heads together to come up with
some plausible interpretations of what was going on in these experiments, one possibility
they agreed upon was that all material objects, no matter how large, travel in waves, except
that the larger the object, the more difficult it is to notice its wave-like properties. In other
words, a baseball does diffuse itself in the form of a wave, but unlike in the double-slit
experiment wherein the wave spans several times the width of the particle in its point-like
form, the wave of the baseball spans very little beyond the width of the baseball in its
solid/spherical form. Consequently, we only see the baseball traveling along the single (and
virtual) path that leads it to the batter.
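
The de Broglie relation itself - wavelength equals h divided by mass times velocity, which the article alludes to but never writes down - makes this point numerically. The sketch below compares a slow electron with a pitched baseball; the masses and speeds are illustrative guesses, not measured values from any experiment described here.

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34                      # Planck's constant, J·s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1.0e6)     # an electron at a modest speed
baseball = de_broglie_wavelength(0.145, 40.0)         # ~145 g baseball at ~40 m/s

print(f"electron: {electron:.2e} m   (roughly the size of an atom)")
print(f"baseball: {baseball:.2e} m   (unimaginably smaller than the ball itself)")
```

The electron's wavelength is comparable to atomic dimensions, which is why its wave character shows up in a double-slit apparatus, while the baseball's is so absurdly small that its wave-like spread is utterly unnoticeable.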

But before physicists could come to any consensus like this, they had to clarify exactly what constituted these waves.
That is, they had to answer the question of what it meant for a material particle, and an energy particle for that
matter, to take the form of a wave. Was the single electron in the double-slit experiment literally passing through
both slits at the same time, or did it, in becoming a wave, take a different form, like a mechanical wave, such that it
was only different points along the crest of this wave that passed through the two slits? And how is it that, in
experiments like the double-slit one, material particles traveled as a wave that could span, at least, the two slits and
the area on the screen covered by the interference pattern, yet stick to the confined space local to the nucleus of an
atom, traveling as a planet orbits its sun? After all, if it's possible for things the size of whole atoms to travel as
waves, what prevents them from dispersing themselves in all directions when they serve as the building blocks of
macroscopic objects?

To complicate the matter, other experiments conducted in the nineteen twenties revealed another dumbfounding quirk
about the way material particles work, perhaps the most dumbfounding quirk of quantum mechanics - even physics in
general. It was the discovery of randomness in nature, or at least what appeared to be randomness. As it turned out,
however, this discovery shed just the right light on the question of the form material particles took when they
traveled as waves - which would carry over to energy particles as well - and also the question of what suppresses
this form when they are bound to each other.

Uncertainty and Randomness


To understand the discovery of randomness, we first need to understand the Heisenberg Uncertainty Principle. In 1925 Werner Heisenberg, along with his collaborator Niels Bohr, invented a mathematical system that described the fundamental workings of particle behavior, energy and material alike, and how they interacted. Heisenberg called his system "matrix mechanics". At around the same time, another very similar mathematical system describing exactly the same phenomena, but in a different way, was being developed by Erwin Schrödinger. He called his "wave mechanics". For a time, there was some dispute between Heisenberg and Schrödinger over whose system was superior, but in the
end, Schrödinger proved that both systems produced the same results, and were thus equally valuable (but, of course, he still maintained that his was better). Together, they heralded the new system under the banner "quantum mechanics" - the first official usage of that title. One of the most well-known implications to come out of this mathematical system, particularly out of Heisenberg's version, is the relation between a particle's position and its momentum. Insofar as the measurement of these two properties is concerned, the relation is that the more precisely one measures a particle's position, the less precisely one can know its momentum, and vice versa. This became known as the Heisenberg Uncertainty Principle (or HUP for short). Now, although the mathematics supporting the Heisenberg Uncertainty Principle cannot be denied, the reasons why it holds in a physical or conceptual sense can be ambiguous. When Heisenberg first attempted an explanation in 1927, what he offered veered very little from a classical account. That is, the two distinguishing features that really set quantum mechanics apart from other sciences - randomness and superposition of particles - played no part in Heisenberg's initial articulation of the principle. It did feature energy quantization (or photons) as well as the de Broglie hypothesis, so in a quite technical sense, the principle, as Heisenberg first put it, is definitely "quantum". But since it stands more as a statement about what we can know about a particle's position and momentum, it contrasts sharply with later formulations of the principle, formulations that show the uncertainty in these variables to be inherently indeterminable.

As an epistemic principle - that is, in regards to what we can know - Heisenberg put it thus: when we want to
measure the position of a single particle, say an electron, we fire a photon at it and time how long it takes to return
after reflecting off the electron. We know exactly how fast a photon travels - 300,000 km/s or the speed of light - and
so by dividing the time by two, we can calculate the distance it traveled. Taking a page from Planck and de Broglie, Heisenberg knew that energy particles and material particles are equivalent, which meant that photons could actually "hit" other particles like a baseball. Bouncing a photon off the electron could therefore affect the electron's momentum quite significantly. Thus, for every measurement of the electron's position, you change its momentum. You change it to something
you cannot deduce in that moment. Thus, the more precisely one
measures a particle's position, the less precisely one can
simultaneously measure its momentum. The flip side of that coin is
that it is possible to reduce the impact the photon has on the
electron's momentum by increasing the photon's wavelength - we
must remember that the photon is still a wave-like entity. A photon
with long wavelength carries less energy, and therefore knocks the
electron with less force. So by firing a generously long
wavelength photon at the electron, one can measure its position
without affecting the electron's momentum much. There is a catch,
of course - by increasing the photon's wavelength, one
compromises the accuracy with which the electron's position can
be read. This is because, at larger wavelengths, the photon tends
to diffract rather than reflect - that is, it doesn't bounce back at
quite the same angle. This can degrade the position readings quite significantly. How, then, does one gather an
accurate reading of momentum from this? What we have to understand about the way particle physics is conducted is
that momentum for particles is not deduced in the same way as it is for macroscopic objects. For a macroscopic
object, one can deduce its momentum by taking two position readings and calculating the difference. Then, by
dividing this difference by the time between readings, one can derive the velocity. Momentum is simply the velocity
times the mass, and so if one knows the mass of the macroscopic object, one can arrive at the momentum easily. But
with subatomic particles the way this is done is different. Experimenters deduce momentum by the shift in
wavelength of the photon upon its return. A shift in wavelength is equivalent to a shift in the energy it carries, and
because momentum and energy must be conserved, one can deduce the particle's momentum based on this wavelength
shift. This method of deduction works reliably enough to get a highly precise measurement of a particle's momentum,
but only at the cost of the precision with which we can measure its position. Therefore, the more precisely one
measures a particle's momentum, the less precisely one can simultaneously measure its position.
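
Two standard relations, not spelled out in the passage above, let you play with Heisenberg's trade-off numerically: a photon's momentum is h divided by its wavelength (so a longer wavelength means a gentler "kick"), and, as a rule of thumb, a photon cannot resolve position much more finely than about one wavelength. The sketch below is only a back-of-the-envelope illustration of that trade-off, not Heisenberg's actual derivation.

```python
# A crude picture of Heisenberg's measurement trade-off:
#   - the photon's momentum kick on the electron is roughly p = h / wavelength
#   - the position can't be pinned down much better than about one wavelength
h = 6.626e-34    # Planck's constant, J·s

def probe_with_photon(wavelength_m):
    position_blur = wavelength_m        # rough resolving power of the probe
    momentum_kick = h / wavelength_m    # rough disturbance to the electron's momentum
    return position_blur, momentum_kick

for wl in (1e-12, 1e-10, 1e-8):         # short, medium, and long wavelength probes
    dx, dp = probe_with_photon(wl)
    print(f"wavelength {wl:.0e} m -> position blur ~{dx:.0e} m, momentum kick ~{dp:.0e} kg·m/s")

# In this toy picture the product of the two columns is always h: sharpening
# the position reading inevitably enlarges the disturbance to the momentum.
```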

By the sounds of it, Heisenberg is suggesting that the problem of uncertainty is an issue with our methods of
measurement. It leads one to wonder whether we ought to be doing something other than bouncing photons off
particles, something by which the momentum and position can be measured with equally high precision. But
Heisenberg insisted that this principle was unconditionally valid - the approach to measurement didn't matter. All
measurement, he argued, is a matter of particles physically interacting. All measurement involves an instrument for
measuring and a phenomenon - always physical by Heisenberg's standards - to be measured. In order to put out a
reading, the phenomenon to be measured must affect the measuring instrument - specifically, by making contact with
it - which in turn means that the measuring instrument must affect the phenomenon to be measured (for every action,
there is an equal and opposite reaction - Newton's third law). All physical phenomena are made of particles,
including all measuring instruments like our photon, and so the interaction between a measuring instrument and the
phenomenon it measures constitutes an interaction between particles - namely, by bumping into each other. It
follows, therefore, that all measurements count as particles bumping into each other, and therefore changing each
other's momentum.


Even so, there is nothing in this formulation that makes reference to terms like "superposition", "randomness" or
anything equivalent. One can comprehend Heisenberg with nothing but classical concepts. The quantization of energy
was a relatively new theory in Heisenberg's time, and the diffraction of long wavelength photons was also known,
but new theories abound all the time, whether in classical mechanics or any other branch of science, without
upsetting the groundwork upon which they stand. Soon enough, however, the Heisenberg Uncertainty Principle
gained a much more in-depth perspective that did incorporate concepts like superposition and randomness,
preserving it as one of the cornerstones of quantum mechanics as it came into its own. What really drove the point of
uncertainty home was the Davisson-Germer experiments that, not only confirmed the de Broglie hypothesis in 1927,
but demonstrated very convincingly that nature indeed has the capacity for randomness - and not just in the epistemic
sense.

The Davisson-Germer experiment, as you will recall, is a double-slit experiment. The inference they drew from
seeing the interference pattern - that particles can exist in multiple positions at the same time - was one thing; that
these positions are selected randomly was another. But this random selection of positions could, nonetheless, be
observed in the same experiment. Although the electron, in virtue of its wave-like form, can pass through both slits at
the same time, it does not remain in the form of such a wide reaching wave when it hits the electrosensitive screen -
despite the fact that an interference pattern still emerges. As shown in figure 3, the interference pattern builds up
after a whole population of white specks appears on the screen. These specks are where each electron - now,
apparently, in the form of a point-like particle - makes its mark upon hitting the screen, and the great majority of them
are concentrated within the regions covered by the bright bands, with fewer of them sporadically scattered between
these regions. What this tells us is that, until the electron hits the screen, it travels as a wave, but upon hitting the
screen, the wave "collapses" (a term we'll get to later) back into a point-like particle. But how does the electron
know where to collapse? How does it know where on the screen to hit? That's the million dollar question. Davisson
and Germer's experiment shows absolutely no discernible pattern in how the interference pattern evolves, except
that there is a greater probability that the electron will hit those regions within the light bands than in the regions
between and outside these bands.

Figure 3: Buildup of the interference pattern.
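
The buildup shown in figure 3 can be mimicked with a small simulation: treat the idealized two-slit intensity as a probability distribution for where each electron lands, draw landing spots one at a time at random, and watch the banded pattern emerge only after many hits. This is a toy model written purely for illustration, reusing the same textbook two-slit intensity formula sketched earlier; it is not an analysis of Davisson and Germer's actual data.

```python
# Toy simulation of the speckle-by-speckle buildup of an interference pattern.
# Each "electron" lands at a random position, with probability proportional to
# the two-slit intensity; individually the hits look random, collectively they
# trace out the bands.
import math
import random

wavelength = 550e-9   # illustrative wavelength, metres
d = 10e-6             # illustrative slit separation, metres
L = 1.0               # illustrative distance to the screen, metres

def intensity(x):
    theta = math.atan2(x, L)
    return math.cos(math.pi * d * math.sin(theta) / wavelength) ** 2

def random_hit():
    """Draw one landing position by rejection sampling from the intensity."""
    while True:
        x = random.uniform(-0.1, 0.1)             # anywhere on a 20 cm screen
        if random.random() < intensity(x):
            return x

# Accumulate hits into 1-cm bins and show how the bands build up.
bins = [0] * 21
for _ in range(5000):
    x = random_hit()
    bins[int((x + 0.1) / 0.2 * 20)] += 1

for i, count in enumerate(bins):
    # each row covers roughly one centimetre of the screen
    print(f"{-10 + i:4d} cm |{'#' * (count // 20)}")
```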

Randomness can also be shown by setting up a particle detection device at each slit. That is, if, in both slits, we
place a device that signals the presence of a particle (in this case, an electron) when it comes close to it, then we can
test the notion that the particle indeed passes through both slits at the same time. But what happens instead is that
only one of the devices detects the particle - that is, it seems as though the particle passes through only one slit.
Which slit this turns out to be appears to be random. Furthermore, only when such detection devices are set up like this does the interference pattern disappear, and normal rectangular-shaped blotches of points show up on the screen
instead. Their positions are, of course, in line with the slits - that is, it is as if the particle was point-like all along,
and thereby could stream only towards that region on the screen that it had access to via whichever slit it passed
through. What are we to make of these findings? The current interpretation of this phenomenon - an interpretation that
surfaced not long after 1927, when these kinds of experiments were conducted - was that a particle can only exist in
multiple locations simultaneously when its position is not being measured. Measure its position with a screen, a
detection device, or any other means, however, and the particle will settle upon that position and no other (what we
call "collapse"), and cease to travel as a wave (or at least, propagate as a wave starting over from a much more
confined region). Now, what "measurement" means in this interpretation is also subject to interpretation. Most
conservative interpretations take it to mean human observation only, but other more encompassing interpretations
take it to mean any physical interaction whatsoever - such as the electron bumping into something like the detection
device. Whatever the interpretation of "measurement", one thing was certain - what these measurements turned up
could not be predicted - they were random.

At this point, it still wasn't fully clear what the essence of these waves was. Were they energy waves? Were they waves of "stuff"? Were they multiple instances of the particle under investigation propagating as a wave? Were they beyond our comprehension? The most all-encompassing answer to these questions came in 1927 after Heisenberg and Bohr convened their conference in Copenhagen, Denmark. The purpose of this meeting was to come up with a workable and thorough interpretation of what the thoughts, experiments, and mathematical breakthroughs in quantum theory of the 1920's had so far availed. Their final consensus was dubbed the "Copenhagen Interpretation", and its claim was that these waves were waves of probable positions. That is, if we take the double-slit experiment, what was happening to the particle wasn't so much that its position was dispersing itself in the form of a wave, but that it was becoming undetermined. That is, unlike the regular, macroscopic objects that we see every day, things at the subatomic scale are never at one particular position in space, nor are they in multiple positions all at once - rather, they have "lost" their position - that is, their position has gone from definite to undefined. The more free they are from measurement - and remember that "measurement" is subject to interpretation - the more undetermined their position. Their positions are never completely undetermined, of course, since there is still a fuzzy region in space where they are more likely to show up when measured (the region swept by the wave) than other regions, but they are never fully determined either - that is, a particle, although it may make a point-like mark on the screen, never really collapses to an infinitely precise point. The collapse is random, of course, and this is what justifies the model of the probability wave. Because the particle isn't really "somewhere", yet neither is it "nowhere", the random collapse can be taken as an indication that its position is probable, and the region of space that the wave sweeps represents the region of highest probability - that is, the region in which, if one were to measure the particle's position, it most likely will show up.

This was the most revolutionary idea physics had ever subsumed, and as soon as the whole of the scientific community got wind of it - and got used to it - a new discipline was born. Quantum mechanics was on the scene. The dawning of this new science marked an undeniable break from classical mechanics. Schrödinger, with his new equations, was comparable to Newton, who set classical mechanics on a stable footing with his kinematic equations. The most frequently cited of Schrödinger's equations is the "wavefunction", which describes the state of a particle as it takes on the form of a wave. Another term that is often thrown around, usually accompanying the term "wavefunction", is "collapse". Together, they are commonly expressed as the "collapse of the wavefunction", which essentially refers to the mathematical description of a particle that has gone from a state of superposition to a more localized state (i.e. going from a multitude of possible positions to fewer possible positions). It is easy to misinterpret these terms as something physical or conceptual, when really they are mathematical. It is typical for amateurs to think of the "wavefunction" as the physical wave itself, and "collapse" as the physical process of a particle becoming less wave-like and more particle-like. But one has to keep in mind that the "wave" is only a region in space where the probability of finding a particle is highest - in other words, there's nothing really there in the utmost physical sense. The "wave" is an abstract concept that finds its best expression in mathematics. This is a crucial point to keep in mind, for quantum mechanics was born from mathematics, unlike classical mechanics, which was born from philosophy. Quantum mechanics, therefore, is first and foremost a mathematical system for describing and predicting observable results of particle behavior. The primary difference between the math of quantum mechanics and that of classical mechanics is that quantum mechanics gives us the probabilities that certain outcomes will occur, whereas classical mechanics gives us certainties that these outcomes will occur.

Now that we have elucidated on the role superposition and randomness play in quantum mechanics, we can rephrase
the Heisenberg Uncertainty Principle in its most formal articulation. The way nature localizes a particle (i.e. gives it a more precise location) is by superimposing various waves of different wavelengths on top of each other. The
greater the range of wavelengths, the more precisely the particle will be localized. When waves of varying
wavelengths are superimposed in such a manner, they tend to cancel each other out except at a specific point, the
point at which the particle has been localized. In other words, a particle that has been localized does not have an
exact wavelength - instead, it has a range whose boundary values taper off. The top portion of figure 4 offers a
graphical representation of this process. This range of wavelengths constitutes superposition in regards to
momentum. That is, because a specific momentum corresponds to a specific wavelength, a range such as this
corresponds to momentum in a state of superposition - in other words, the particle's momentum, when its position is
given a high degree of precision, is all the more undetermined. By similar reasoning, when a particle's momentum is
given a high degree of precision, its position goes into a heightened state of superposition. This is because in order
for momentum to acquire a more precise value, it must be constituted by a more precise wavelength - but this results
in the particle taking on more of a wave-like form such that its breadth spans a greater region of space, thereby
diffusing its position throughout that region. The bottom portion of figure 4 offers a graphical representation of this
process. As the reader can now see, momentum and position really are inherently mutually exclusive - so much for
pointing the finger at measurement.


Figure 4: The Heisenberg Uncertainty Principle understood as inherent.
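
In its modern quantitative form - which the passage above describes in words but never writes out - the principle says that the spread in position times the spread in momentum can never be smaller than ħ/2 (Planck's constant divided by 2π, then halved). The sketch below simply evaluates that bound for a particle confined to an atom-sized region; the confinement size is an illustrative choice, not a figure from the article.

```python
# Quantitative form of the uncertainty principle: delta_x * delta_p >= hbar / 2.
import math

h = 6.626e-34                        # Planck's constant, J·s
hbar = h / (2 * math.pi)

def minimum_momentum_spread(position_spread_m):
    return hbar / (2 * position_spread_m)

delta_x = 1e-10                      # confine a particle to roughly an atom's width
delta_p = minimum_momentum_spread(delta_x)
electron_mass = 9.11e-31             # kg

print(f"position spread:          {delta_x:.1e} m")
print(f"minimum momentum spread:  {delta_p:.1e} kg·m/s")
print(f"as an electron speed:     {delta_p / electron_mass:.1e} m/s")

# Squeeze delta_x smaller and the minimum momentum spread grows in proportion:
# position and momentum cannot both be pinned down at once.
```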

Another point this enlightens us about is that position is not the only property that can go into superposition. Although the word "superposition" sounds like a reference to position, it is a misnomer in this respect. There are many other properties that can go into superposition - momentum is just one. These properties usually come in pairs, and sometimes in triples. We call these pairs and triples "conjugate variables". Position and momentum are the first variables we have seen to be conjugate. Another pair is energy and time. A third is angular position and angular momentum. One triple is the spin of a particle around the x-, y-, and z-axes. That is, the more precisely one measures the spin of a particle around a chosen axis, the less precisely one can measure its spin around the other two. It is important to note, however, that "spin", in this context, does not have quite the same meaning as spin in the classical sense - that is, as a baseball or a planet might spin - but since this makes no difference to our purposes, we can think of spin as though the particle in question were a nanoscopic sphere resembling a billiard ball rotating about the x-, y-, and z-axes. Interestingly, Einstein, along with Podolsky and Rosen, found that these three conjugate variables led to a paradox, but this also involved the phenomenon of quantum entanglement, which we will get to later. Suffice it to say, almost anything about a particle that one can measure bears a certain degree of uncertainty - almost anything - for there are some properties that have never varied across different measurements, such as a particle's mass or its charge (but see sidenote) - but other than that, uncertainty plagues a great deal of things we used to take for granted as having definite values without our even knowing.

The lesson to be learnt here, a lesson that carried through the decades and still rings loudly within physicist circles
today, is that as we conduct our experiments and take our measurements, the objective being to gain ever more
precise and plentiful knowledge of the state of our world, we unavoidably change this state. The answers we seek in
this endeavor are true only for the instant these measurements are taken, and thereafter that which we measured has
been changed to something unknown. This is a radical shift from the classical worldview in which it is taken for
granted that the state of the world remains the same before, during, and after we conduct our experiments and take
our measurements. We used to fancy ourselves to be independent, autonomous beings whose relation to the world is
to observe it from a non-participating standpoint. We used to think that however we involve ourselves in the world
in order to measure it and test it experimentally, we do so without disturbing it in the least. Indeed, from this vantage
point, so long as our measurements are sufficiently precise and our experiments conducted with the utmost care, the
knowledge gleaned from these labors would be perfect. The legacy quantum mechanics leaves us with, however, is
that our knowledge of the natural world can never be perfect. As soon as it's acquired, something else in the world is
changed. If we knew it beforehand, we know it no longer thereafter.

Entanglement, Cats, and Other Paradoxes


This was the view taken up by scientists as the twenties merged into the thirties, and for the next few decades, this view held its own. Debates carried on and challenges arose, of course, but quantum mechanics survived it all. One challenge worth noting, proposed by Schrödinger of all people, was the paradox known as "Schrödinger's cat". The term "paradox" is a bit of an overstatement in this case since it technically doesn't demonstrate an impossibility or contradiction in quantum mechanics, but it does bring its counterintuitive nuances glaringly to the fore.
Schrödinger proposed a thought experiment: suppose his cat was
put in a closed box along with a counter tube, some hydrocyanic
acid (a poisonous gas), and a radioactive substance. The
radioactive substance is prone to "quantum effects" such as
superposition - meaning that it has a 50/50 chance of decaying
within a given amount of time. Whether it decays in this time is
inherently random - nature herself will decide. If it doesn't decay,
the cat will live. But if it does decay, the counter tube will
activate, releasing the poisonous hydrocyanic acid, and the cat
will die (don't worry, it has eight more lives ). But because the
radioactive substance is never measured, it remains in a
superposition state throughout the whole process. In fact, nothing
is measured inside the box, nothing is observed. This means that
the counter tube exists in a state of superposition as well, being
activated and staying dormant at the same time. The gas is also in
a state of superposition, being released and not released at the same time. Finally, the cat is in a state of
superposition - it is dead and alive at once. Now, if a hardnosed quantum physicist really wanted to stick to her guns,
she could deny that this paradox disproves anything. Technically, it doesn't defy any of the tenets of quantum
mechanics or their logical consequences - but she would have to have some radically different reactions to what
most consider strange and counterintuitive, so much so that it often feels more natural to doubt the reality of
Schrödinger's thought experiment. This is exactly what Schrödinger had in mind when he published his paradox in
1935, but unfortunately for him, most quantum physicists stuck to their guns despite how hard it was to swallow.
After all, this new breed of scientists was far less interested in the true nature of reality, and more interested in
what they could observe and measure. They preferred not to speculate on the state of Schrödinger's cat while the box
was closed and no one could see it. The only reality to them was what could be seen once the box was opened. If the
question of the cat's state before that time had to be answered, they opted to describe it as a superposition state -
dead and alive at the same time. But to the layman, this paradox could not be accepted. No one, other than the
quantum physicists, could believe that an object the size of a cat, or anything observable to the naked eye, could exist
in a state of superposition.
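For readers comfortable with the standard bra-ket shorthand (used here purely as an illustration of what the thought experiment claims), the unobserved box is described by a single superposition that chains the atom, the counter tube, the gas, and the cat together:

    |\Psi_{\text{box}}\rangle \;=\; \tfrac{1}{\sqrt{2}}\Big( |\text{undecayed}\rangle\,|\text{dormant}\rangle\,|\text{no gas}\rangle\,|\text{alive}\rangle \;+\; |\text{decayed}\rangle\,|\text{triggered}\rangle\,|\text{gas}\rangle\,|\text{dead}\rangle \Big)

Nothing in the formalism singles out the cat as too large for such a state - which is precisely the discomfort Schrödinger wanted to provoke.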

And as if these intellectual hurdles weren't challenging enough, in 1935, Einstein, Podolsky, and Rosen published an
article in which they presented the Einstein-Podolsky-Rosen Paradox (or EPR-Paradox for short) - this time a
real paradox - according to which, considering the phenomenon of quantum entanglement, one could
simultaneously determine at least two of three conjugate variables. Quantum entanglement is a term that refers to a
unique type of influence one particle can have on another so long as they are "entangled". In its most general sense,
"entanglement" is a type of relation two or more particles might have with each other - namely, that they have had
effects on each other at one time. In a more specific sense, "entanglement"
means that two particles were born from a common source in such a way
that the states or properties of one determined the states or properties of the
other. When Einstein and his collaborators published their paper in 1935, it was known that certain particle pairs can be created from a single event or source. For example, the J/Ψ (psi) particle can decay into
an electron and a positron with opposite spins from each other. That is, if,
for example, the newly created electron has spin 1/2, the newly created
positron will have spin -1/2. What this means, Einstein and company
realized, was that if a pair of particles that are entangled in a manner such as this are separated from each other by a significantly large distance, one can take one of the particles and, on it, do measurements of one conjugate variable, and on the other particle, do measurements of another variable of the same conjugate group - thereby knowing their values simultaneously. For example, we've seen that spin around the x-, y-, and z-axes forms a conjugate triple. Therefore, by Heisenberg's Uncertainty Principle, one should not be able to glean any information on the spin values for two of the three axes, so long as one measured, with high precision, the spin value for the third remaining axis. But if two particles were created such as the electron and positron from the J/Ψ particle, one could separate them by a vast distance, measure the spin of (say) the electron about the x-axis, and simultaneously measure the spin of the positron about the y-axis. Considering that we can predict the spin value of one particle about a given axis, knowing the spin value of the other particle about the same axis, our knowledge about the spin value of the electron about the x-axis gives us reliable knowledge of the spin value of the positron also about the x-axis - namely, it will be the negative of the other. Likewise, our knowledge about the spin value of the positron about the y-axis gives us reliable knowledge of the spin value of the electron also about the y-axis - again, the negative of the other. Therefore, we can know the spin values for both the x-axis and the y-axis for the same particle, and this is true for either particle, the electron or positron. But the spins about the x- and y-axes are supposed to be conjugate variables - and so cannot both be known
simultaneously. Therefore, Einstein, Podolsky, and Rosen were convinced that they had found a flaw in the logic of
quantum mechanics - or at least, the Heisenberg Uncertainty Principle.
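For those who want to see the entangled pair written out, the kind of state described above is usually represented by the spin-singlet state (standard notation, quoted here only as an illustration):

    |\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\Big( |{+\tfrac{1}{2}}\rangle_{e^-}\,|{-\tfrac{1}{2}}\rangle_{e^+} \;-\; |{-\tfrac{1}{2}}\rangle_{e^-}\,|{+\tfrac{1}{2}}\rangle_{e^+} \Big)

Measured along any one axis, the two spins always come out opposite, which is what allows a measurement on one particle to stand in for knowledge about the other.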

Einstein's primary motive in pushing this argument was to show how the states of these particles, and therefore the
states of any physical phenomena that aren't being observed or measured, have to be determined beforehand - that is,
without being measured at every moment. To see how this argument works, one needs to understand Einstein's
conviction that nothing travels faster than light. Physicists were also very leery of "spooky action at a distance", as they called it, wherein one physical phenomenon affects another physical phenomenon instantly across vast
distances of space - that is, without anything from the former traveling the gulf between them and physically
interacting with the latter. Together, these two principles entail that no physical system, spatially distant from another
physical system, can affect that spatially distant system in a shorter amount of time than it takes light to travel from
the former to the latter. Therefore, if the spin of either particle about a given axis is not known beforehand, and if this
epistemic lack entails that this spin is undetermined and in a state of superposition, then when one finally does
measure the spin, gaining knowledge of it, this shouldn't be able to determine the spin of the particle's entangled
partner instantly - something from the measured particle, some information at the very least, must travel the expanse
of space separating them, and the fastest it can do so is the speed of light. But if we know beforehand exactly what
that spin value must be, based on the measurements we took of the original particle, then we've gained knowledge of
something before it is determined - before it is true. There's no question this is paradoxical, and Einstein felt it
proved there was something flawed with the picture of non-determinism in the ontological sense. Einstein realized,
of course, that the prediction of the spin of the non-measured particle could not be verified by observation or
measurement unless one trekked across the expanse of space separating him/her from the non-measured particle.
Since this cannot be done faster than the speed of light, any information that might make its way from the measured
particle to the non-measured one, information that could determine its state, could get there before he/she does - or at
the very soonest, at the same time that he/she does. However, a slightly more sophisticated way to construe the
EPR-Paradox is to imagine that two observers take simultaneous measurements, one per particle, and then reunite
half way between to compare results. In this case, the observers could conceivably meet up with each other in
shorter time than it takes information to travel from one particle to the other. The consequence would be either that
their results adhere to what we know about the particle pair production of the J/Ψ particle yet violate Einstein's
cosmic speed limit, or that their results violate what we know about the particle pair production of the J/Ψ particle
yet adhere to Einstein's cosmic speed limit. In either case, a law of nature seems to have been broken.

If you want to fix a paradox, what you need to do is reconsider at least one of the premises - that is, one of the
assumptions you take to be true and upon which the paradox depends - and although the indeterminism of particles
when they aren't being measured was the one Einstein had his scornful eye on, defenders of quantum mechanics had
other premises to choose from. For example, one could argue, and some did argue, that it was really Einstein's
cosmic speed limit that has to be abandoned - that is, they argued that quantum mechanics proves that sometimes
things can travel faster than light (although, in the case of quantum entanglement, nothing would be accelerating to
the speed of light - it would be an instantaneous effect from the start). Einstein was adamantly against the notion of
indeterminacy in nature, ironically since he played such an important role in promoting Planck's original quantum
hypothesis, preferring to believe in "local realism" instead. Local realism is the idea that no physical system can
affect another physical system unless it comes into contact with it by traversing every bit of space between them.
This is not surprising since his theory of relativity depends on local realism. Unfortunately
for Einstein, in 1964, John Bell proved that nothing could be traveling from the measured particle to its entangled partner. What John Bell proved, more generally, was that no "hidden
local variable" theory, as they were called, could account for quantum entanglement or the
EPR-Paradox. A Hidden Local Variable Theory is a theory that attempts to salvage the old
classical views of physics by supposing that there are indeed physical and local factors
involved in any "quantum phenomena" - that is, the phenomena of randomness or superposition
of particles - but that these elements are hidden. It is therefore assumed that these hidden
variables can account for quantum phenomena in such a way that, although we may not as yet
understand how, classical mechanics is upheld. Bell came up with a formal proof that this
could not possibly be the case. How he proved this is not important for our purposes, but the
effect Bell's Theorem, as it became known, had on this debate is important. The effect was that the great majority of
physicists took it as proof that some things can have instantaneous effects on other things despite the fact that they
may be separated from each other by vast amounts of space.
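Although the proof itself lies beyond our scope, the flavor of Bell's result can be stated in a single line. For two measurement settings a, a' on one particle and b, b' on its partner, the CHSH form of Bell's inequality says that any local hidden variable account must keep the correlations E within a fixed bound, whereas quantum mechanics - and, later, experiment - can exceed it (standard results, given here only as an illustration):

    |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \;\leq\; 2 \quad \text{(any local hidden variable theory)}, \qquad \text{quantum maximum} = 2\sqrt{2}

It is this measurable gap between 2 and 2√2 that turned the debate from philosophy into experiment.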

Quantum entanglement takes us a bit far from the original subject matter from which the roots of quantum mechanics stem. Instantaneous effects across great distances are indeed intrinsic to quantum mechanics, but when most experts
think of quantum mechanics, they think, first and foremost, about superposition and randomness - and, of course, the
idea that started it all, Planck's hypothesis of energy quantization. We have touched upon the Copenhagen Interpretation, the first attempt, by Heisenberg and Bohr, to account for all these things. But over the years, other
interpretations have surfaced, interpretations that contend with the Copenhagen one, and we need to look at the most
salient ones before closing the subject. But even the Copenhagen Interpretation needs elaboration, and so we will
begin the next section with this one.

Interpretations
The most unsavory thing about the Copenhagen Interpretation was its obstinate refusal to say anything about reality as
it actually is. Both Einstein and Schrödinger actively worked against this departure from the original mission of
science that quantum physicists were taking. That is, Einstein and Schrödinger believed, in accordance with the
views of classical physics, that the purpose of science was to reveal to mankind the nature of reality in its truest and
most objective form. What quantum physicists seemed to be interested in, on the other hand, was what could be said
only of the data collected, because it was only the data that could be observed and measured. The lesson learnt from
the nineteen twenties was that nature really does some very bizarre and absurd things when we aren't looking. In that
light, the prospect of ever gaining any knowledge of nature in her unobserved state seemed quite futile. She is too
much affected by human measurement, they understood, and so measurement could no longer serve the purpose of
exposing nature's true form. Instead, they unanimously agreed, measurement must be used to gather data that can be
put towards constructing a mathematical picture of the world as nature wants us to see it. If these quantum
physicists were to speak of nature at all, it would be within the framework of this data-constructed world of human
observation. If they were to speak of nature outside this framework, they would prefer to stay silent. If they felt
compelled to say something, they would speak in terms of probabilities. But even then, it would be put in terms of
measurement - that is, they would speak of the probabilities of what their measurements would yield if they were to
make observations. They felt that to speak otherwise - that is, to speak without making reference to probability,
measurement, or observation - was to defeat the purpose of speaking - at least, if they were speaking as professional
physicists. Yet, for those who didn't understand the reasoning behind this shift in the manner of speaking, and for
those who believed we deserved more than to sit in a shroud of ignorance about nature, like Einstein and
Schrödinger, the Copenhagen Interpretation, out of which all this measurement-speak grew, was completely
unacceptable. Outside the tightly knit circle of quantum physicists, deep stirrings of dissatisfaction lingered, and
from this emerged several competing interpretations of quantum mechanics.

Before we get into these other interpretations, however, let's flesh out some possible misconceptions the reader
might have about the Copenhagen Interpretation. For example, at the heart of the Copenhagen Interpretation is the
concept of the "probability wave", but this wave was never meant to be taken as something literally "out there". It's
only a mathematical description for understanding the probabilities of what one will find when taking measurements.
To really understand the Copenhagen Interpretation, one needs to realize that it says absolutely nothing, nor does it
purport to say anything, about reality. The only sense in which this is not entirely true is that it is a statistical model
built from the data of real-world experiments and measurements. But concerning questions addressing the general
nature of the real world as it always is, the Copenhagen Interpretation doesn't add much. This is important to note,
not just for understanding the Copenhagen Interpretation properly, but for understanding how other interpretations
don't have to be seen as competing interpretations in the full sense of the word. That is, because the Copenhagen
Interpretation stays silent when it comes to questions of reality, one might welcome other interpretations as answers
to these questions without rejecting the Copenhagen one. Strong adherents to the Copenhagen Interpretation might
feel compelled to object - any interpretation addressing reality, they might say, is speculative at best, and cannot fall
under the rubric of hard science. They would be right, of course, since anything that claims to be science must
somehow subject itself to measurement and testing. Given the quantum mechanical principle that measurement and
testing affect the phenomena under study (or at least, that we cannot know whether it does or doesn't), no such
interpretations could ever pass as scientific. This may be true, and we would be in trouble, therefore, if we wanted
to present such interpretations as scientific. This is not our goal, however, and we would not be in conflict with the
Copenhagen Interpretation if we take these interpretations seriously as philosophical models of how the real world
works when she isn't being measured.
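In fact, the entire content of the "probability wave" can be compressed into one formula, the Born rule, which is about as far as the Copenhagen Interpretation is willing to go (standard notation, shown here only for concreteness):

    P(\text{particle found near } x) \;=\; |\psi(x)|^2 \, dx, \qquad \int |\psi(x)|^2 \, dx = 1

The wavefunction ψ is treated as a bookkeeping device for these probabilities, not as a physical thing spread out in space.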

A consequence of this non-speculative stance the Copenhagen Interpretation takes is that it could be misconstrued as
implying that consciousness bears a causal relationship to wave-like particles - namely, that consciousness causes
them to collapse. This is not what the Copenhagen Interpretation says, of course, but if it were misunderstood as
making ontological statements, this would appear to be the conclusion it draws. Therefore, it is not uncommon to
hear it said that consciousness causes collapse, but this needs to be recognized as a different interpretation.
Presumably, by this interpretation, all properties of all things that can exist in a state of superposition do exist in such
a state, except when they are exposed to consciousness. When this happens, the superposition state collapses into a
more classical state (or close enough to classical for consciousness not to notice a difference). This view also holds
that only consciousness collapses the wavefunction. This makes sense considering it is an ontologically oriented
version of the Copenhagen Interpretation. That is, because the Copenhagen Interpretation refrains from speculating
on whether anything beyond the scope of observation could ever collapse the wavefunction, reserving such
statements for consciousness alone, the ontologically oriented version takes this to mean that only conscious
observations cause the wavefunction to collapse. Whether consciousness does this knowingly and out of free choice
is anybody's guess, and one would have to seek the opinion of a particular individual who holds this view. Whatever the case, however, this interpretation, like many others we will touch on, can't be tested scientifically.

It is perhaps best, therefore, that the Copenhagen Interpretation was the first to emerge, and promoted by some of the
most ardent positivists in the field. Positivism is a semantic theory which states that the meaning of our statements
and concepts can, and ought to, be put in terms of how one would go about verifying them empirically. So, for
example, what it means to say that a wire has an electric current running through it is that if you applied a voltage
meter to it, you would see it spike. Needless to say, positivism and empiricism, an epistemic theory that says we
know the world by observing it, go hand-in-hand. If it cannot be observed, in other words, we not only have no
knowledge of it, but it doesn't even have a meaning, and thus it makes little sense to talk of its reality. In this way, quantum mechanics, and thus the rest of science, escaped possible degeneration into what hardnosed positivists abhor most - metaphysics and crackpottery. The findings of the nineteen twenties that led to the birth of modern quantum mechanics could have easily led elsewhere. These discoveries were so bewildering and mind-blowing that they opened the
floodgates for all sorts of novel and outlandish speculation to rush in. It was clear that classical mechanics was
being overturned, but what would take its place was not immediately obvious. There was ample opportunity for
something more ontological in its orientation, as compared to the Copenhagen Interpretation, to come first -
something that spoke to the more deep-seated preferences of most people, including many scientists as much as
they might deny it, to know reality as it actually is. If such an interpretation did come to light before the Copenhagen
one, it might have been latched onto and quickly ushered into the position of the formal stance the scientific
community would take on the question of what quantum mechanics meant. If this happened, it might have been too
late for the Copenhagen Interpretation, and it's reasonable to doubt that it would have made much headway. This
would have been a disaster for positivists and empiricists the world over, and they wouldn't have been overreacting
- not necessarily at least. Science is at her best the more she abstains from unfalsifiable speculation. Not that
speculation doesn't have its place - it has brought science through in times of need - but when such speculation is unfalsifiable (i.e. cannot be tested), to accept it as science is to devalue science, and it leans all the closer to
metaphysics and opinion. Although there is nothing wrong with metaphysics and opinion, not by my standards
anyway, they are not science. Science should remain science. It is one of our most important and productive
institutions, and serves a unique and vital role for humanity. If science ceases to be science, we lose an essential
tool, and we take an enormous step backward after all the progress it has helped us achieve. Therefore, although the
Copenhagen Interpretation leaves something to be desired when questions about reality are raised, we ought to
respect it and be grateful that it was the first and most tenacious of the interpretations to account for the anomalies of
quantum mechanics. Having said that, we do want to press on with our inquiries into the nature of the real world, and
looking at a few of the most prominent interpretations in the field, other than the Copenhagen one, will surely help us
in this task.

Following close behind the Copenhagen Interpretation in popularity is the "Many Worlds Interpretation". Hugh
Everett, who first proposed it in 1957, called it the "Relative State Formulation" - hinging on the relation between an
observer and the phenomenon observed - and it was only three years later that Bryce DeWitt figured it deserved the
title "Many Worlds". The central difference between the Many Worlds Interpretation and the Copenhagen one is that
the former attempts to do away with non-determinism. It does so by replacing the collapse of the wavefunction with
"decoherence". To understand decoherence, it is useful to imagine that superposition consists of multiple instances of
the object under consideration. That is, for example, if it is a particle's position that is in a superposition state, then
we can imagine that multiple instances of the particle coexist, each taking a unique position in, and exhausting, the
region covered by the superposition state (we imagine this with caution, of course, for it too could pass as a
speculative and unsupported interpretation). So long as each and every instance coexists in the same superposition
state, we'd say that they "cohere" with each other. The moment one takes a position measurement, however, the superposition state "decoheres" - in a crude sense, they break from each other. More specifically, at least one instance,
the one whose position was obtained by the measurement, decoheres from the rest. What happens at this point is that
the universe "splits" into multiple copies of itself. One offspring universe inherits the instance whose position was
captured by the measurement, while all others inherit the remaining instances, one for each. In essence, decoherence
is the branching of the universe whereby each houses a different measurement outcome, and no instances are left in a
state of superposition. Therefore, by this interpretation, all possible outcomes actually do occur - not all in the same
universe, of course, but in a greater realm of existence that some call the "multiverse". So the wavefunction never
collapses - it decoheres instead. With no collapse, the outcomes aren't really random. If all outcomes occur, no
particular outcome is being selected at random. There is the question of what the observer taking the measurements
observes, and at first it may seem as though he/she is being randomly paired up with one particular outcome, but if we keep in mind that even the observer is split into each offspring universe, each instance of him/herself asking the
same question - "Why this outcome?" - then it seems much less random after all. That is, since each and every
instance of the observer gets a unique outcome, and all such outcomes are exhaustively assigned to each observer
instance, there's nothing blatantly random about one particular observer getting one particular outcome. That outcome
must necessarily be measured by at least one observer.
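A schematic way of writing the branching, in the same bra-ket shorthand used earlier (an illustration of the idea, not a derivation), is:

    \Big( \sum_i c_i\,|\text{outcome}_i\rangle \Big)\,|\text{observer ready}\rangle \;\longrightarrow\; \sum_i c_i\,|\text{outcome}_i\rangle\,|\text{observer sees outcome}_i\rangle

No term on the right-hand side is ever discarded; each term simply becomes its own branch, and every instance of the observer finds exactly one outcome within his or her branch.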

The Many Worlds Interpretation has been extended to other unexplained phenomena. In particular, it has been
suggested that not only does our universe split into
offspring, but that these offspring can exchange particles
and energy with each other. One phenomenon this
accounts for nicely is quantum tunneling. Quantum
tunneling can be seen when a particle that, under
ordinary circumstances, would not be able to penetrate a
barrier, say a thick sheet of metal, with the low amount
of energy it has, is suddenly found on the other side of
the barrier. By the principles of classical mechanics, the only way the particle could do this is if it was given the extra energy it needed to overcome the forces holding the barrier together. But classical mechanics would also rule this out if no source was available to provide this energy - nothing acquires
energy spontaneously or out of nothingness, it would say. Nonetheless, this is indeed what seems to be happening
with quantum tunneling. The particle, suddenly and randomly, acquires the energy necessary to pass through the
barrier. An interpretation based on the Many Worlds view could
say that the particle "borrowed" this energy from a parallel universe, and after putting it to use, returned it from whence it came (but see sidenote). Another phenomenon this exchange concept accounts for is the apparently spontaneous creation and
destruction of "virtual particles". These are particles that seem to
come into existence and disappear as swiftly as they came. They
are called "virtual" because they exist far too briefly for anyone to
measure or confirm their existence by experimental means
(although scientists do have ways of testing for their effects). If
parallel universes did exist alongside ours, and if they can
exchange particles and energy with ours, it is not unthinkable that
these virtual particles are simply passing through our universe on
their way to another. These are just some of the ways the Many
Worlds Interpretation proves its versatility, and accounts for the
longevity and prevalence it has enjoyed among thinkers, scientists
and non-scientists alike.
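For a sense of why tunneling is only ever noticed at the atomic scale, the standard quantum mechanical estimate (common to all interpretations) for the probability of a particle of mass m and energy E slipping through a rectangular barrier of height V0 > E and width L is:

    T \;\approx\; e^{-2\kappa L}, \qquad \kappa \;=\; \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}

The probability falls off exponentially with the barrier's width and with the energy shortfall, which is why electrons tunnel routinely while baseballs, for all practical purposes, never do.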

One variant on the Many Worlds Interpretation is the "Many Minds Interpretation". According to this
interpretation, it is consciousness that splits rather than the universe itself. The Many Minds Interpretation is midway
between the Many Worlds Interpretation and the radical form of the Copenhagen Interpretation wherein
consciousness alone collapses the wavefunction. The Many Minds Interpretation holds the same notion as the latter -
that everything is constantly in a state of superposition - but it differs in the role it attributes to consciousness.
Consciousness doesn't collapse the wavefunction, according to the Many Minds Interpretation, and nothing really
does. The entire universe is always in a maximal superposition state. What splits instead are the individual minds of
each and every observer. They split for every measurement they
make - and in this context, any observation of the physical world
counts as a measurement. When consciousness observes one
particular outcome to the exclusion of all others, it does so
parallel to an infinitude of copies of itself, each one observing a
different outcome drawn from the same superposition state of that
which it observes. What this view capitalizes on is the lack of
need to assume that any sort of split occurs in the physical world.
If one simply assumes that the universe is one grand wavefunction (i.e. it is always in a state of absolute superposition), then every
observer in every time and place is observing all possible
outcomes simultaneously. Although it is impossible to imagine
observing all possible outcomes simultaneously, one need not
imagine that all observations are taken in by the same
consciousness. Each observation instance is matched up with one
whole consciousness. It follows that each consciousness would be
unconscious of any of its counterparts in the superposition of all
minds observing the same physical phenomenon. If we take any
one of these mind instances and follow its path as it goes on to
observe other physical phenomena - phenomena that, unbeknownst
to the mind in question, exist in superposition states - we would find that, the moment it finally observes the outcomes of these other phenomena, it splits yet again. This is not the physical phenomena themselves splitting, as
the Many Worlds Interpretation would have it, but the consciousness observing them splitting, each offspring pairing
up with one instance among the superposition of physical phenomena.

Some "single world" interpretations, as they might be called, also employ the concept of decoherence. It has been
suggested that decoherence occurs with any physical interaction whatsoever, not just human measurement. This
contrasts sharply with the ontological version of the Copenhagen Interpretation - the one that attributes the collapse
to consciousness in a fully causal sense. If there were no conscious beings, the latter interpretation says, all
properties of all things capable of going into superposition would be in superposition to the utmost extreme - that is,
the Sun and the Moon would be absolutely everywhere, as would all other planets and stars, all particles and
midsized objects, and every other physical thing in the universe. And even with the existence of conscious beings, so
long as none of them are aware of the states of all these things, they persist in their extreme form of superposition.
This is a very hard mouthful to swallow for many, as it goes against every fiber of common sense we have. For this
reason, many welcome interpretations that endorse decoherence for any physical interaction whatsoever as fresh
alternatives. One drawback to these interpretations, however, is that they are often vague on what constitutes an
interaction. After all, if position is one of the most salient properties to go
into superposition, it's hard to imagine how physical interactions ever take
place. That is, for instance, if one particle enters the vicinity of another particle, and the positions of both are in a heightened state of superposition,
there is no precise point at which they can be said to be in contact with each
other (see sidenote ). Therefore, what are we to say? Are they impinging
on each other? Are they just whizzing by each other? Are they interacting in
some other manner? The fact is, because of the "fuzziness" that quantum
mechanics introduces into physics, the exact meaning of a physical
"interaction" must be reconsidered. It is not clear what it consists of or what
brings it about. In fact, the most plausible interpretation is, like the collapse
of the wavefunction due to measurement, that it occurs randomly. Nature
whimsically decides: "Yes, these particles will interact."

It is assumed that particle interactions are what keep electrons in their orbitals around nuclei. By way of the
attractive force between protons and electrons, the electron's position collapses, or decoheres, to within the small
orbitals that stick ever so close to the bundle of protons that make up the nucleus. Other forces would also be
involved in the collapse, or decoherence, of these protons and the accompanying neutrons to within the nucleus. This
would be the case for all such forces. Therefore, this interpretation explains very well why particles don't go
propagating in all directions when they are bound together by the many forces that hold atomic structures together.

And what happens, in single world interpretations, to the many particle instances after decoherence - that is, the
instances that are measured by one observer and those that aren't? There are no extra worlds for the non-measured
instances to go into. Well, it is often said that the non-measured instances "dissipate" into the environment. What
"dissipate" means in this context can have different meanings depending on whose interpretation you consult, but in
general it means that all other instances of the property or state you're interested in measuring have become lost or
inaccessible to measurement. The wave has become "diluted", so to speak, in the surrounding environment - it has
become mixed up and blended into the wavefunction of all other particles constituting the immediate environment.
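In the usual technical shorthand (offered here only as a sketch of what "dissipating" amounts to), once the environment's states E_i become entangled with the system and grow mutually distinguishable, the interference terms of the system's reduced description fade away:

    \rho_{\text{system}} \;=\; \sum_{i,j} c_i\,c_j^{*}\,\langle E_j | E_i \rangle\, |i\rangle\langle j| \;\;\longrightarrow\;\; \sum_i |c_i|^2\, |i\rangle\langle i| \quad \text{as} \quad \langle E_j | E_i \rangle \to \delta_{ij}

The other instances are still there in the total wavefunction, but no local measurement can bring them back into view - which is all that "lost to the environment" really means.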

So far, none of these interpretations have dealt with the wave/particle duality of matter and energy in a way other than
from a probabilistic perspective - that is, a model that explains the
wave-like nature of particles in terms of probable positions
spread throughout a region of space. There is one interpretation,
however, whose account of wave/particle duality brings quantum
mechanics back to the old classical view wherein particles are
just particles, waves are just waves, and never the twain shall
meet (well, maybe I should say never the twain shall be one, for they do meet). This is the Bohm interpretation, presented in 1952 by David Bohm. Bohm suggested that there isn't just one entity that sometimes exists as a particle and other times as a wave, but two entities - one always existing as a particle and one always existing as a wave. Every particle, Bohm says, is always accompanied by a "pilot wave". A pilot wave is a wave that guides the particle in its trajectory. It thus lays out the options the particle has for taking on one position or another. The particle can exist anywhere within the region covered by the wave. Although Bohm proponents claim that their
model captures the same deterministic flavor as classical theories, they have not been able to make predictions,
vis-à-vis the outcomes of our measurements, with any more reliability than with any other interpretation. It is simply posited that the pilot wave carries out some obscure algorithm when deciding the properties of the particles it
guides. On the other hand, the Bohm interpretation deals with superposition quite effectively. All there is, the Bohm
interpretation says, is a particle and a wave, and the form these take - at least the particle - is perfectly consistent
with the classical worldview.
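For the record, Bohm's formulation does write the "guidance" down explicitly - in the standard presentation, the particle's velocity at each instant is read off from the phase of the wave (quoted here only to show what the pilot wave is supposed to do, not as an endorsement):

    \mathbf{v} \;=\; \frac{\hbar}{m}\,\mathrm{Im}\!\left( \frac{\nabla \psi}{\psi} \right)

Whether this amounts to a genuine explanation of the outcomes we observe, or merely restates the problem, is a question we will return to in the Evaluations section.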

Finally, there is the Orchestrated Objective Reduction Interpretation, or Orch OR for short. First proposed by Roger Penrose in 1989, this interpretation says that
there is an upper limit to how extreme a state of superposition can be. That is, the
possible values a property like position, momentum, spin, energy, time, etc., can take
on can never be equally probable across an infinite range. Quantum physicists
generally agree that these properties can take on any value whatsoever, but that the
more extreme these values, the less probable. For example, in the double-slit experiment, although there is a chance that the particle might be found on the opposite side of the galaxy from the laboratory, the probabilities of this are infinitesimally small, and the bulk of the probability lies between the particle emitter and the screen. The Orch OR model adds that the universe enforces this inequality of
probability distribution by setting an upper limit on how far and wide the wave can
propagate. This limit is taken to be a universal constant much like the speed of light or the charge of an electron. If
superposition ever hits this ceiling, it automatically collapses to a more precise state. So, for example, if we
removed the screen from the double-slit experiment, allowing the particle to propagate indefinitely, the Orch OR model predicts that it would eventually collapse into a less varied state regardless of whether it interacted with anything or was somehow measured. Its travels wouldn't end there, of course; it would go on propagating as a wave,
but it would have to begin over again, or at least from a state it had previously surpassed.
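Penrose's own estimate of where this ceiling lies ties the collapse time to gravity: the larger the gravitational self-energy E_G of the difference between the superposed mass distributions, the sooner the reduction must occur (his proposed relation, quoted here as an illustration of the idea rather than a derivation):

    \tau \;\approx\; \frac{\hbar}{E_G}

A lone electron can thus remain in superposition practically forever, while anything approaching everyday mass would collapse almost instantly - which is why, on this view, we never catch macroscopic objects behaving like waves.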

Penrose extends this idea even further. With his Orch OR model, he builds a bridge between physics and
psychology. He says that what causes the wavefunction to collapse is conscious decision making - that is, free will.
He is effectively killing two birds with one stone with this assertion, accounting for both superposition and the
randomness of collapse. Every instance of an entity in a state of superposition, he says, is actually having a
conscious experience. This experience is the impetus for a decision that the entity is about to make, and when the superposition finally collapses, either in response to the universal limit or some interaction with the environment, this collapse corresponds to the decision being made. For instance, a particle in a state of superposition with respect to its
position is actually in the midst of contemplating what position to take. Since it hasn't taken a position on the matter,
so to speak, it doesn't have a position just yet. The most highly concentrated region of the probability distribution
represents the choices it is leaning towards. When it finally settles on a choice, it collapses into the position so
chosen. Collaborating with Penrose on this theory is Stuart Hameroff, an anesthesiologist who argues in favor of
Penrose's interpretation by demonstrating quantum effects at the level of whole neurons. Essentially, the gist of this
argument is that the Orch OR model can explain human consciousness when we consider that neurons exhibit the
same quantum processes Penrose's model accounts for. Of course, no one has observed neurons in superposition
states - the mere idea is an oxymoron according to quantum theory - but Hameroff claims that there is indirect
evidence for this. We will explore this idea in more detail in the paper Determinism and Free-Will.

The above are just a few of the numerous interpretations of quantum mechanics. If the reader feels I have overlooked
some that are just as worthy of note as the ones above, I apologize with my only excuse being that there are just too
many, and to go through them all would fall beyond the scope and purpose of this brief overview. What we have
touched on above is more than enough background for the reader to take in as he/she moves on to other papers in this website. The only thing that remains is to explicate the position we are taking in this website - that is, which interpretation we deem to be the best. We will do so by evaluating each one on its strengths and its weaknesses,
and in the end, picking the one that makes the most sense.

Evaluations
The way we will do this - evaluate each interpretation, that is - is by adhering to two criteria. First, we will lean
towards the interpretation that best suits MM-Theory. This will not be easy for those interpretations that involve
randomness, for randomness, as we will see in the paper Determinism and Free-Will, is detrimental to a theory like
ours, depending on a deterministic framework for the universe as it does. We will resolve this problem in the
aforementioned paper, but for now we will have to accept the possibility that even the most congruent interpretation
will turn out to be troublesome if it includes randomness. Now, just to ensure that we aren't being circular in our
reasoning - that is, judging these interpretations based on our theory, and not the other way around as it should be -
our second criterion will be that the best interpretation must be backed by the strongest justifications. Therefore, we
will judge it as we would any philosophical theory - by its internal consistency and the plausibility of its claims. We
ought to recall that these interpretations concern the true nature of the world when we aren't observing it, and
therefore to take a philosophical approach to them would be more fruitful than a scientific one. They all agree on the
data gathered from mountains of experimental evidence - they differ only in the speculations one is inclined to make
after having seen the data. In short, we will assess their pros and cons based on their own merits, but with an inclination towards supporting our theory.

It may seem ironic, therefore, that we will refrain from judging the Copenhagen Interpretation at all. Why is this?
Because, as we have seen, the Copenhagen Interpretation says nothing about reality whatsoever - it is simply a
mathematical system that describes, quite accurately, the probabilities of the outcomes we observe in quantum
mechanical experiments. This is just factual, not speculative. It was quite deliberately set up to allow others (i.e.
non-scientists) to venture whatever guesses they felt were most probably true of the real world, which includes us of
course, without seriously threatening the coherency and merit of the Copenhagen Interpretation itself. That is, it
allows us to posit our theory as a possibility for what might really be the true nature of the universe (albeit, still
needing reconciliation between determinism and randomness, but we will address that in another paper). Therefore,
there is no need whatsoever to evaluate the logical worth of the Copenhagen Interpretation as it doesn't speak for or
against our theory.

So let's look at some of the other interpretations. Let's start with the ontologically oriented version of the
Copenhagen Interpretation wherein consciousness causes collapse. Although the difference between the conception
of consciousness held by this interpretation and that held by our theory is glaring, we will not argue over whose
conception is the right one. Instead, we will examine this interpretation on its own ground, assuming, for the moment,
that consciousness is as any classical or conventional notion would have it. The first problem that comes to mind is
that, in order for this interpretation to work, consciousness would have to keep track of an enormous amount of
information. It could not choose for superposition to collapse into any random state - rather, it seems that
consciousness is bent on collapsing the world in accordance with classical mechanics. For example, suppose that
late one night, as I look out my bedroom window, I see a full moon sitting just above the east horizon. I go to bed,
and the next morning, as I walk out the door to go to work, I look up and see the moon sitting just above the west
horizon. According to the ontologically oriented version of the Copenhagen Interpretation, the moon in both cases -
when I saw it the previous night and when I saw it this morning - was in a state of absolute superposition prior to my
seeing it, taking on all possible positions at once, and it was due solely to my looking at it that it collapsed into the
singular positions in which I saw it (of course, there are other people on Earth capable of collapsing the moon's
position with their own gaze, but for the sake of this thought experiment, we'll pretend I'm the only one looking at the
moon at those moments ). Yet, I'll stake my life on the prospect that if I were to do the proper calculations,
consulting astronomic charts and all the physics textbooks I can get my hands on, I'll get results confirming the exact
location at which I saw the moon this morning. That is, I'll bet the moon's position this morning can be explained
accurately enough by the laws of classical mechanics. So if my consciousness was solely responsible for the
position of the moon upon collapse, then somehow, perhaps unconsciously, I would have to be doing mountains of
math and physics, starting with its initial position that I saw last night, to figure out where it should be when looking
at it this morning. I would have to be doing this for absolutely everything in the universe I could potentially lay my
eyes upon, which would be an enormous task for my consciousness to take on. The volumes of information to keep
track of would be staggering - too much for any one mind to handle.

Secondly, one has to ask why the mind would apply this classical or Newtonian scheme to the great majority of run-of-
the-mill phenomena, like the moon, but not to the occasional
quantum experiment like the double-slit one. Asking this question
another way, if consciousness collapses the wavefunction in
virtual accordance with classical mechanics, why does it not do
so in the experiments that led to this very interpretation? Well, one
obvious reply to this is that the wavefunction always favors
certain outcomes over others. In the case of the double-slit
experiment, the wavefunction favors collapse within the regions
covered by the light bands of the interference pattern. Putting this
in terms of the consciousness-causes-collapse interpretation, it is
somehow in the nature of consciousness itself that the more
probable outcomes are preferred over the less probable ones. We
can carry this reply over to the case of the moon's position, or any
other phenomenon in the universe, saying that outcomes in
accordance with classical mechanics are just extremely probable
and that any other outcome deviating from this by only minute amounts become extremely improbable. The problem
with this reply, however, is that it presupposes other factors besides consciousness involved in the collapse. If
consciousness really is the only thing responsible for the collapse, then when no one's looking at it, the moon should
proceed in its orbit exactly like a subatomic particle, propagating in various directions like a wave. The probability of where we will find it should be spread out just as it is for a particle - perhaps even exhibiting an interference
pattern if there so happen to be two gigantic slits and an enormous screen in the cosmos. In other words, we
shouldn't always see the moon in locations consistent with classical mechanics - not all the time. So what keeps it
from straying from its straight, or rather arched, path across the heavens? There must be other factors. The
gravitational pull of the Earth is one that comes to mind. The molecular and subatomic bonds that keep the rocks,
dust, and the moon as a whole intact are another. All these things count as other kinds of interactions, some between
particles and others between macroscopic material bodies, in which consciousness plays no part. But by this
account, the interpretation under consideration ends up sounding more like one of the decoherence interpretations, and so, in essence, we leave the consciousness-causes-collapse interpretation when we bring in other factors
besides consciousness. At this point, then, we will postpone any further comment until we get to these other
interpretations.

What about these Many Worlds and Many Minds interpretations? They suffer the same problems as the
consciousness-causes-collapse interpretation. The problem, namely, is that we typically experience the world as
though classical mechanics were the best description of it. Why don't we ever see the moon in places where it
shouldn't be according to the laws of planetary motion? The Many Worlds and Many Minds interpretations say that
we should, and for exactly the same reasons as the consciousness-causes-collapse interpretation. If every time we
make an observation of some physical object, whether it's a particle or something as big as the moon, we split the
universe into as many offspring as there are instances of that object making up its superposition state, then which
instance we get paired up with should be just as likely, and seem just as random, as in the consciousness-causes-
collapse case. The same applies to the Many Minds interpretation, except there would be no universe splitting - just
instances of objects being paired with instances of minds. So in a few universes, or a few pairings of mind instances
to object instances, we will see the moon in very awkward positions in the sky - awkward by the standards of
classical mechanics, that is. Is it just coincidence, then, that we never see this, or anything to do with other
observable objects that would be just as awkward? Are we just lucky to be perpetually allocated to universes, or
mind-object pairings, that seem to play out in accordance with the conventional laws that seem so natural to our
everyday world and intuitively predictable? Well, we could say that any awkward turn of events, such as (say) the
moon being around Jupiter, is, although possible, extremely unlikely, and it would only happen once in a trillion
universe splittings (or mind-object pairings) that we would witness such an anomalous spectacle. But this is no
different than the defense we gave for the consciousness-causes-collapse interpretation, which we thereafter refuted.
We refuted it on the grounds that the entire reason these awkward outcomes are so improbable is because something
has already restricted the wavefunction - that is, the possible regions in space where physical objects are most likely
to be found - to those possibilities that are typical only of classical mechanics. Otherwise, everything should travel
as a wave - planets, particles, even people. In other words, the reason we'll never see the moon hanging around
Jupiter is because forces like the Earth's gravity or the atomic and subatomic bonds holding the moon together and
preventing any part of it from veering away from any other part do indeed cause the wavefunction to collapse. It
collapses (or decoheres) independently of our consciousness, any universe splitting, or any mind-object pairing. The
collapse/decoherence must be independent of these because the restricted possibilities that such
collapse/decoherence results in are what we're given to begin with - that is, the probability of always finding the
same electron close to its nucleus, for example, as opposed to dispersing off into space as a wave would, is much
greater than it would be if it did disperse as a wave, and this greater probability is given before we have a chance to
observe any outcome, and therefore trigger a universal split or a new mind-object pairing. Thus, there has to be more
to collapse (or decoherence) than simply what universe, or what pairing, our minds get allocated to.

Let's now look at the Bohm Interpretation. Although the Bohm Interpretation does away with superposition, it leaves
something to be desired in its claim that it does the same with randomness. It is said that the pilot wave guides the
particle in the properties it manifests (such as position, momentum, spin, etc.), but when we inquire further into how
the wave does this, we find this account to be vacuous. To claim that a pilot wave "guides" the particle without
elaborating on how it does so is no more enlightening than the claim that "some mechanism" determines the outcomes
of any quantum mechanical experiment. In other words, the Bohm Interpretation adds nothing informative to quantum
mechanics - at least, not where randomness is concerned.

Furthermore, Bell's Theorem put a damper on the Bohm Interpretation when it proved that nothing local could
account for the randomness of quantum phenomena. If there were any mechanism determining the states resulting
from collapse, it would have to be non-local. Therefore, if proponents of the Bohm Interpretation wanted to carry on
with their view, they would have to forgo the image of a local pilot wave, and opt for a non-local one. This, in fact,
is what happened. Many Bohm proponents adapted their view such that the pilot wave became a "universal
wavefunction". It didn't exist local to the particles it determined, but remained in the ubiquitous background. In other
words, it was as if the universe itself was guiding all particles. But this is even worse than the local version of the
theory, for not only is no light shed on the means by which this wave guides all particles, but one can't even
conceptualize it as a wave anymore. It remains a wave in name only - the "universal wavefunction" - but what kind
of mechanism this obscure term refers to is anyone's guess. The fact is, anyone can submit the proposal that
"something" determines the outcomes of quantum phenomena, local or otherwise, but doing so would miss the point -
namely, to contribute something informative to the questions surrounding quantum mechanics.

So let's move onto single-world decoherence theories. These are actually not bad interpretations. Their greatest
feature, in my opinion, is their simplicity. By granting that any particle interaction can decohere the wavefunction, this view doesn't complicate the matter by positing extreme forms of superposition, like the ontologically oriented version of the Copenhagen Interpretation. Likewise, it doesn't chop the universe up into several copies, like the Many Worlds interpretation, and this adds to its simple character. Superposition remains unaccounted for, however, as does randomness, and we will have to address these if we are to adopt an interpretation like this. Another bonus of these interpretations is that they allow us to imagine the universe as persisting in much the same states as it would
under the classical view. These states are not exactly as the classical view would have them, but they are a
convenient approximation. There is so much going on in the universe, so many physical interactions. Even distant
stars from neighboring galaxies have small effects on each other by way of gravity, solar energy, and perhaps other mechanisms. Material objects, even ones as rarified as hydrogen gas, are held together by their atomic bonds, which
are just interactions between electrons and protons. All these things count as physical interactions, and according to
single-world decoherence interpretations, this means that these interactions are constantly reinforcing, as much as
possible, the classical states that our common sense notions are in the habit of holding onto. Therefore, single-world decoherence interpretations don't veer much from these common sense notions, and that adds to their parsimony - a real
advantage when they're up against competing interpretations.

The Orch OR model is an extended type of single-world decoherence theory. If any particle interaction decoheres
the wavefunction, then the brain - a highly condensed and chemically active organ - should have plenty of
decoherence going on inside itself. This works well with the Orch OR model since it permits conscious decision
making to be associated with these decoherence events. Therefore, if we deemed single-world decoherence
interpretations impressive for their simplicity, then as far as they take us, we should deem the Orch OR model similarly impressive. But the Orch OR model takes us beyond decoherence interpretations - into a theory of consciousness
and free-will. This comes as a blessing and a curse. It is a blessing in the sense that, as we pointed out above, it
accounts for superposition and randomness in the same stroke. It is a curse, however, in the sense that it is a
competing theory to ours. But rather than attack the Orch OR model head-on, we will find a way to reconcile it with
our theory. In fact, I intend to show, in Determinism and Free-Will, how the two theories - ours and the Orch OR
model (or rather, the theory of "Quantum Consciousness" as presented by Stuart Hameroff) - can actually
complement each other.

Needless to say, we will be adopting the single-world decoherence model as our official stance on quantum
mechanics. Although we have yet to deal with its shortcomings - the proper conceptualization of superposition and
the randomness of decoherence - the proper place to address these issues is in Determinism and Free-Will. Its
simplicity is a very strong advantage, making for a very elegant interpretation. The single-world decoherence
interpretation is a more recent idea, and so at the time of its advent, quantum physicists were quite used to the
concepts of superposition and randomness. These concepts did little, therefore, to take away from its elegance, and
so it is not surprising that, aside from the Copenhagen Interpretation, the idea of single-world decoherence is gaining
in popularity. It is safe to say that, today, it is fairly mainstream. This is another good reason to opt for this
interpretation - that is, it is always safe to go with a model that is held in high regard by a great many professionals
in the field.

But what about the merits of the other interpretations? As we've seen, the consciousness-causes-collapse and the
Bohm Interpretation didn't fare so well in our assessment, and the Many Worlds and Many Minds Interpretations are
plagued with the same shortcomings as the consciousness-causes-collapse interpretation. It is worth pointing out, however, that these
shortcomings, or at least a few of them, are only problematic insofar as the interpretation that is plagued by them is
judged on its own grounds. That is to say, just as we promised at the beginning of this section, we judged each
interpretation, not only on its compatibility with MM-Theory, but on its own grounds as a scientific (or as close to
scientific as they get) account of quantum mechanics. We could have, for example, endorsed the Bohm
Interpretation. It nicely does away with superposition, and where randomness is concerned, although it hardly
satisfies a scientifically/materialistically minded person, it doesn't bother the more metaphysically oriented
thinker quite as much. In other words, whereas a non-local account like the "universal wavefunction" is too much
new-age mumbo-jumbo for a keen scientist, it doesn't conflict in the least with a more metaphysical view like
MM-Theory. We would simply posit that the "universal wavefunction" is a sort of algorithm (mimicking wave
mechanics) that the Universal Mind carries out when deciding how to move particles. Technically, even the Many
Worlds Interpretation doesn't conflict with MM-Theory. It too does away with randomness, and although
superposition and the splitting of universes are things that MM-Theory would have to grapple with, they are not
logically inconsistent with it. This is what the principle of the Unassailability of Science is all about. It tells us that
no matter what the discoveries of science, MM-Theory claims that those discoveries are physical representations of
experiences being had elsewhere in the universe. Superposition is no exception to this, and we will do this principle
justice by giving an account of superposition in the paper Determinism and Free-Will. The multiverse, unfortunately,
will not be given an equally decisive account, but that's no reason to suppose none could be given. But we have
settled on an interpretation that doesn't work well with MM-Theory. Single-world decoherence interpretations leave
a lot on our plate, for MM-Theory has yet to account for superposition and randomness, and although the principle of
the Unassailability of Science promises us that superposition can be accounted for, randomness is an exception to
this. It actually does conflict with our theory. We will deal with this in Determinism and Free-Will, and the fact that
we are accepting this burden shows that we have not taken the easy route - we have accommodated the scientific
community more than our own theory.

© Copyright 2008, 2009 Gibran Shah.

Appendix
Yet Another Model
The diagram on the left shows the currently held
model of atomic orbitals. This means that the model
Bohr forwarded was, yet again, replaced by a better
one. The key difference is in the shape the orbitals
take. In the Bohr model, the shapes are similar to
those of the Rutherford model - namely, circular
or elliptical paths surrounding the nucleus - but in
today's model, these shapes are noticeably different.
The basic orbitals (top/right), which correspond to
the lowest energy levels, are not so different from
the Bohr model except that they take a spherical shape rather than a circular or
elliptical one. Other orbitals higher up take on shapes reminiscent of balloons
(top/left and bottom/left), toruses (top/left), or kidney beans (bottom/right).

Another major difference, which will be explained as we get further into
quantum mechanics, is that the electrons that occupy these orbitals are not
literally orbiting the nucleus - at least, not in the conventional sense. Rather, they
form what some have called an "electron cloud". What this phrase means to
convey is that the electrons don't take a definite position within these orbitals -
rather they take a "fuzzy" position, and fill out these orbitals somewhat
analogously to a cloud of gas filling a chamber. Depending on which orbital the
electron is in, the shape this "cloud" takes conforms to the shape of the orbital.
This may sound confusing to the reader, and at this point, the reader should feel
confused. To really grasp what it means for an electron to take on the form of a
"cloud", or for its position not to be definite, we need a more in-depth
understanding of the nature of quantum mechanics. Hopefully, by the end of this
paper, the reader will have such an understanding, and therefore, it might be
worth his/her while to return to this sidenote at that point.
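
To give the "fuzzy position" idea a bit more substance, here is a minimal sketch in Python - an illustrative addition, not part of the original sidenote - of the radial probability distribution for the simplest case, the hydrogen 1s orbital. It uses the standard textbook wavefunction for that orbital; the "cloud" turns out to be densest, per unit radius, at the Bohr radius, though the electron may still be found farther out.

import math

A0 = 5.29e-11   # Bohr radius, in metres

def radial_probability_1s(r):
    # Probability per unit radius of finding the hydrogen electron at a
    # distance r from the nucleus, for the spherically symmetric 1s orbital:
    # P(r) = 4*pi*r^2 * |psi_1s(r)|^2, with psi_1s(r) = e^(-r/a0) / sqrt(pi*a0^3).
    psi_squared = math.exp(-2.0 * r / A0) / (math.pi * A0**3)
    return 4.0 * math.pi * r**2 * psi_squared

# The "cloud" is densest (per unit radius) right at the Bohr radius...
print(radial_probability_1s(A0))
# ...but there is still some probability of finding the electron farther out.
print(radial_probability_1s(3 * A0))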

Buckyballs!
The largest thing to ever exhibit the interference pattern in the double-slit
experiment is the buckyball. A buckyball is a relatively large molecule made
entirely of carbon atoms.

Superposition Without Randomness
Physicists will tell us that a few particle properties, like mass or charge, never go
into superposition states. But technically, this cannot be known. What is known,
at the very least, is that if they did go into superposition, the states they collapse
into when measured are not random. That is, it's actually quite possible that
although an electron's mass and charge, when measured, are always 9.11×10⁻³¹ kg
and −1.6×10⁻¹⁹ coulombs respectively, they nevertheless go into superposition at
all other times. When they collapse, they would simply acquire the same value or
state every time. We have to distinguish between superposition and randomness -
they are not the same thing. Although it makes good intuitive sense that
superposition ultimately leads to random collapse, this connection is not
necessary in a strictly logical sense.

The Proper Account of Quantum Tunneling


The animation to the right is actually a poor depiction of how quantum tunneling
works. The proper account of quantum tunneling has very little to do with
"borrowing" energy. The proper account is as follows. When a particle is
confined to a very small region, enclosed there by a nearly impenetrable barrier,
although its position has been narrowed down to that small region, it still exists
in a state of superposition to some degree. The wave that constitutes this
superposition state is capable of spanning just beyond the barrier, as shown in
figure 5, and this means that there is a minute chance that the particle's position,
when measured, will turn up beyond the barrier. Simply put, the particle can end
up on the other side of the barrier because its position is not fully determined -
not because it burst through it.

Figure 5: Quantum Tunneling

One might ask, therefore, why the need exists to bring in additional accounts,
such as the borrowing of energy, for a full explanation of quantum tunneling. The
answer is that some physicists feel that, although the fundamental concepts of
classical mechanics can be abandoned in light of the discoveries of quantum
mechanics, this is not true for all classical concepts. That a particle can exist
somewhere within a confined space enclosed by a barrier at one point in time and
then somewhere outside that space at a later point in time still violates certain
principles of classical mechanics - principles that one cannot justify abandoning
in virtue of quantum mechanics. Namely, it violates certain conservation laws (of
energy and momentum), and quantum mechanics, despite its overturning of
classical mechanics, has not ruled these laws out. Therefore, we still need to
account for how a particle can go from one side of a barrier to the other without
sufficient energy to do so. But the crucial question is just whether extra energy is
even needed for quantum tunneling to occur. The indeterminacy of a particle's
position is seen, by some, as simply an alternative to the "borrowing" account -
one that does not suffer from the very insufficiency the "borrowing" account is
meant to remedy.
As noted many times already in this paper, these are matters of speculation, and
so one is free to choose either interpretation without clashing with scientific fact.
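
For concreteness, here is a minimal sketch in Python of the standard rectangular-barrier estimate of the tunneling probability, T ≈ exp(-2κL) with κ = √(2m(V0 − E))/ħ. This is a textbook approximation, not something specific to the interpretations discussed in this paper, and the particle energy, barrier height, and barrier width in the example are arbitrary illustrative values.

import math

HBAR = 1.054571817e-34   # reduced Planck constant, in joule-seconds
M_E = 9.109e-31          # electron mass, in kilograms
EV = 1.602e-19           # one electron-volt, in joules

def tunneling_probability(energy_ev, barrier_ev, width_m):
    # Rough transmission probability T ~ exp(-2*kappa*L) for a particle of
    # energy E meeting a rectangular barrier of height V0 (> E) and width L.
    E = energy_ev * EV
    V0 = barrier_ev * EV
    kappa = math.sqrt(2.0 * M_E * (V0 - E)) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Example: a 1 eV electron meeting a 2 eV barrier one nanometre wide.
# The result (roughly 3.6e-5) is tiny but not zero - the "minute chance"
# described above of finding the particle beyond the barrier.
print(tunneling_probability(1.0, 2.0, 1e-9))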

Particles Don’t Touch


Technically, particles never come in contact with each other - not even in the
classical paradigm of physics. Instead, they affect each other with their charges.
The fact remains, however, that how these charges affect the particles they
act upon depends greatly on where the particles are relative to each other. The
closer they are, the stronger the force of the charge. And the direction in which
one particle lies relative to the other determines the direction in which the
other is pushed away or pulled in. So if their positions are undetermined, it
is hard to fathom how their charges can affect each other in any definite manner.
The more undetermined their positions - as in the double-slit experiment - the
harder this is to fathom: what, for instance, determined that the particle would
hit the screen at the precise location it did?
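
To see how sensitively the force depends on position, here is a small sketch in Python of Coulomb's law, F = k·|q1·q2|/r², for two point charges - again just a textbook illustration, with the separations chosen arbitrarily. Halving the distance quadruples the force, so an undetermined position leaves the force correspondingly undetermined.

K = 8.988e9            # Coulomb constant, in N*m^2/C^2
E_CHARGE = 1.602e-19   # elementary charge, in coulombs

def coulomb_force(q1, q2, r):
    # Magnitude of the electrostatic force (in newtons) between two point
    # charges q1 and q2 (in coulombs) separated by a distance r (in metres).
    return K * abs(q1 * q2) / r**2

# Two electrons a tenth of a nanometre apart (roughly an atomic distance)...
print(coulomb_force(-E_CHARGE, -E_CHARGE, 1e-10))   # about 2.3e-8 N
# ...and the same pair twice as far apart: the force drops to a quarter.
print(coulomb_force(-E_CHARGE, -E_CHARGE, 2e-10))   # about 5.8e-9 N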

