
As more evidence comes out showing that the Large Hadron Collider has located the Higgs

boson (or some general Higgs-like particle), the next few years will focus largely on sifting
through this evidence and refining the Standard Model of particle physics. Physicists will
conduct measurements from millions of particle collisions in an effort to more precisely define the
physical properties of these Higgs-like particles (and the related Higgs field, which is the real
object of interest).

But we humans are always looking toward the future, so a natural question is: What next?

Fortunately, the next thing is something that physics has been investigating since the 1970s,
developed less than a decade after Peter Higgs came up with his Higgs mechanism. That
thing?

Supersymmetry

Under this theory, all of the Standard Model particles would
contain a counterpart. For each of the fermions (the leptons and quarks) there would be a
counterpart that's a boson, and for each of the bosons (the force carriers) there would be a
fermion counterpart. These "superpartners," as they are called, would presumably (since we
don't run into them in the normal course of existence) be unstable and rapidly decay, but it is
certainly possible that the Large Hadron Collider could create them.

In fact, it's even possible that the refinements made based on our growing understanding of
the Higgs boson & Higgs field parameters will tell us more precisely what energy levels to
check out for some of these particles.

So don't worry, true believers ... there are still plenty of things for particle physicists to look for.
That giant particle accelerator out in Switzerland isn't going to run out of work anytime soon.

One of the most visually impressive recent discoveries in physics is the phenomenon of
quantum levitation, in which a superconductor becomes suspended within a magnetic field.
Quantum locking goes a step beyond this. It isn't the same as simple magnetic repulsion,
because the superconductor itself doesn't carry any net electrical charge. Instead, it
expels the magnetic field around it (the Meissner effect), but if the superconductor is thin
enough, some of the field penetrates the material in quantized flux tubes. The result is that the
magnetic field effectively "locks" the superconductor in place relative to the source of the
magnetic field.
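The "locking" involves quantized flux tubes, each carrying one magnetic flux quantum, Φ0 = h/2e. As a rough, hedged sketch, the number of tubes threading a thin superconducting disc can be estimated; the field strength and disc size below are illustrative assumptions, not figures from any particular demonstration:

```python
# Order-of-magnitude count of quantized flux tubes threading a thin
# superconducting disc. B and area are assumed illustrative values.
h = 6.626e-34        # Planck constant, J s
e = 1.602e-19        # elementary charge, C
phi0 = h / (2 * e)   # magnetic flux quantum, ~2.07e-15 Wb

B = 0.05             # assumed field strength, tesla
area = 1e-4          # assumed disc area, 1 cm^2
n_tubes = B * area / phi0
print(f"{n_tubes:.1e}")  # ~2.4e9 flux tubes pinning the disc in place
```

With billions of tubes pinned at once, even a small disc is held very rigidly relative to the field source.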

Physicist Boaz Almog from Tel Aviv University recently gave a TED talk where he
demonstrates the phenomenon. It's available on our list of TED physics videos. (Though it
may not be quite as cool as when Stephen Colbert quantum levitated ice cream.)

The Higgs boson is getting a lot of press since the July 4 CERN announcement that it may
have been detected by the Large Hadron Collider, but the boson is only part of the story. See,
the real work isn't done by the boson itself, but by the Higgs field, which theoretical physicist
Peter Higgs proposed in 1964 as a means of explaining how the Standard Model of quantum
physics could account for the mass shown by fundamental particles. The Higgs boson is just a
physical manifestation of that field, created when you shove enough energy into it. (In
quantum physics, fields and particles are viewed as different ways of representing the same
basic physical entity. This is, of course, a bit of an over-simplification, but it's close enough.)
Here is how theoretical physicist Brian Greene explained the Higgs field on the Charlie Rose
show on PBS:

Mass is the resistance an object offers to having its speed changed. You take a baseball. When
you throw it, your arm feels resistance. A shotput, you feel that resistance. The same way for
particles. Where does the resistance come from? And the theory was put forward that perhaps
space was filled with an invisible "stuff," an invisible molasses-like "stuff," and when the
particles try to move through the molasses, they feel a resistance, a stickiness. It's that
stickiness which is where their mass comes from.... That creates the mass....

... it's an elusive invisible stuff. You don't see it. You have to find some way to access it. And
the proposal, which now seems to bear fruit, is if you slam protons together, other particles,
at very, very high speeds, which is what happens at the Large Hadron Collider... you slam the
particles together at very high speeds, you can sometimes jiggle the molasses and sometimes
flick out a little speck of the molasses, which would be a Higgs particle. So people have
looked for that little speck of a particle and now it looks like it's been found.
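Greene's molasses analogy rests on the Newtonian definition of mass as resistance to acceleration, a = F/m. His baseball-versus-shotput comparison can be sketched numerically; the 100 N push is an arbitrary illustrative value, while the two masses are standard regulation figures:

```python
# Newton's second law: the same push produces far less acceleration
# on the heavier object -- that resistance is what we call mass.
force = 100.0        # assumed push, newtons
baseball_kg = 0.145  # regulation baseball mass
shotput_kg = 7.26    # men's shot put mass

a_ball = force / baseball_kg   # ~690 m/s^2
a_shot = force / shotput_kg    # ~14 m/s^2
print(a_ball / a_shot)         # the shot "resists" ~50x more
```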

Greene isn't the only one out there talking about the Higgs boson, of course. There's also this
nice post from Jim Baggott, over at the Huffington Post, which helps to shed some light on
why physicists are so excited, and I had an earlier post where I collected together quite a lot of
the early explanations that spread across the web in the wake of the LHC announcement.

In 2007, physicist and author Frank Close compiled a slender volume with an
ambitious goal: to succinctly explain the scientific thinking behind the concept of
"nothingness," the vacuum of non-existence, the very void in which nothing exists. The
problem is that, simple as this concept seems at first glance, it gets incredibly complex very
quickly in scientific terms, especially when you try to incorporate the findings of relativity
and quantum physics.

The result of Close's efforts is The Void, a book that gains even more relevance for those
trying to understand how the Higgs boson fits into the universe.

Games don't have to have a scientific theme to have scientific lessons, as I recently discovered
when I played the fantasy-themed board game Magician's Kitchen. In this game, players take
on the roles of sorcerer's apprentices who are scurrying around a kitchen, trying to prepare potions
and light the fireplace first. Science plays into the game because the board is designed with a
series of hidden magnets that cause your magnetized figures to tumble, dropping the
ingredients before they can complete their task.

It's a fun game for young children and certainly something that opens up an avenue for
discussing some key scientific concepts ... but it's not the most gripping game for grown-ups
and even older children. Check out more in our full review of Magician's Kitchen.

Question: What Is the Anthropic Principle?


Answer:

The anthropic principle is the belief that, if we take human life as a given condition of the
universe, scientists may use this as the starting point to derive expected properties of the
universe as being consistent with creating human life. It is a principle which has an important
role in cosmology, specifically in trying to deal with the apparent fine-tuning of the universe.
Origin of the Anthropic Principle

The phrase "anthropic principle" was first proposed in 1973 by Australian physicist Brandon
Carter. He proposed this on the 500th anniversary of the birth of Nicolaus Copernicus, as a
contrast to the Copernican principle that is viewed as having demoted humanity from any sort
of privileged position within the universe.

Now, it's not that Carter thought humans had a central position in the universe. The
Copernican principle was still basically intact. (In this way, the term "anthropic," which
means "relating to mankind or the period of man's existence," is somewhat unfortunate, as one
of the quotes below indicates.) Instead, what Carter had in mind was merely that the fact of
human life is one piece of evidence which cannot, in and of itself, be completely discounted.
As he said, "Although our situation is not necessarily central, it is inevitably privileged to
some extent." By doing this, Carter really called into question an unfounded consequence of
the Copernican principle.

Prior to Copernicus, the standard viewpoint was that the Earth was a special place, obeying
fundamentally different physical laws than all the rest of the universe - the heavens, the stars,
the other planets, etc. With the decision that the Earth was not fundamentally different, it was
very natural to assume the opposite: All regions of the universe are identical.

We could, of course, imagine a lot of universes that have physical properties that don't allow
for human existence. For example, perhaps the universe could have formed so that
electromagnetic repulsion was stronger than the attraction of the strong nuclear interaction. In
this case, protons would push each other apart instead of bonding together into an atomic
nucleus. Atoms, as we know them, would never form ... and thus no life! (At least as we know
it.)
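To get a feel for the balance being described, here is a rough back-of-the-envelope comparison of the Coulomb repulsion between two protons against the typical binding the strong interaction provides. The 1 fm separation and the ~8 MeV per nucleon figure are standard illustrative values, not precise quantities:

```python
# Rough comparison: electrostatic repulsion between two protons at
# nuclear distances vs. typical strong-force binding per nucleon.
k = 8.988e9          # Coulomb constant, N m^2 / C^2
e = 1.602e-19        # proton charge, C
r = 1.0e-15          # ~1 femtometre separation (illustrative)
J_PER_MEV = 1.602e-13

coulomb_MeV = k * e**2 / r / J_PER_MEV
print(f"Coulomb repulsion at 1 fm: {coulomb_MeV:.2f} MeV")  # ~1.44 MeV
# Typical strong-force binding is ~8 MeV per nucleon, so in our universe
# the strong interaction wins by roughly a factor of five; flip that
# ratio and nuclei never form.
```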

How can science explain that our universe isn't like this? Well, according to Carter, the very
fact that we can ask the question means that we obviously cannot be in this universe ... or any
other universe that makes it impossible for us to exist. Those other universes could have
formed, but we wouldn't be there to ask the question.

Variants of the Anthropic Principle

Carter presented two variants of the anthropic principle, which have been refined and
modified much over the years. The wording of the two principles below is my own, but I
think captures the key elements of the main formulations:

Weak Anthropic Principle (WAP): Observed scientific values must be able to allow
there to exist at least one region of the universe that has physical properties allowing
humans to exist, and we exist within that region.
Strong Anthropic Principle (SAP): The universe must have properties that allow
life to exist within it at some point.

The Strong Anthropic Principle is highly controversial. In some ways, since we do exist, this
becomes nothing more than a truism. However, in their controversial 1986 book The
Anthropic Cosmological Principle, physicists John Barrow and Frank Tipler claim that the
"must" isn't just a fact based on observation in our universe, but rather a fundamental
requirement for any universe to exist. They base this controversial argument largely on
quantum physics and the Participatory Anthropic Principle (PAP) proposed by physicist John
Archibald Wheeler.

A Controversial Interlude - Final Anthropic Principle

If you think that they couldn't get more controversial than this, Barrow and Tipler go much
further than Carter (or even Wheeler), making a claim which holds very little credibility in the
scientific community as a fundamental condition of the universe:

Final Anthropic Principle (FAP): Intelligent information-processing must come into


existence in the Universe, and, once it comes into existence, it will never die out.

There is really no scientific justification for believing that the Final Anthropic Principle holds
any scientific significance. Most believe it is little more than a theological claim dressed up in
vaguely scientific clothing. Still, as an "intelligent information-processing" species, I suppose
it might not hurt to keep our fingers crossed on this one ... at least until we develop intelligent
machines, and then I suppose even the FAP might allow for a robot apocalypse.

Justifying the Anthropic Principle

As stated above, the weak and strong versions of the anthropic principle are, in some sense,
really truisms about our position in the universe. Since we know that we exist, we can make
certain specific claims about the universe (or at least our region of the universe) based upon
that knowledge. I think the following quote well sums up the justification for this stance:

"Obviously, when the beings on a planet that supports life examine the world around them,
they are bound to find that their environment satisfies the conditions they require to exist.

It is possible to turn that last statement into a scientific principle: Our very existence imposes
rules determining from where and at what time it is possible for us to observe the universe.
That is, the fact of our being restricts the characteristics of the kind of environment in which
we find ourselves. That principle is called the weak anthropic principle.... A better term than
"anthropic principle" would have been "selection principle," because the principle refers to
how our own knowledge of our existence imposes rules that select, out of all the possible
environments, only those environments with the characteristics that allow life." -- Stephen
Hawking & Leonard Mlodinow, The Grand Design

The Anthropic Principle in Action

The key role of the anthropic principle in cosmology is in helping to provide an explanation
for why our universe has the properties it does. It used to be that cosmologists really believed
they would discover some sort of fundamental property that set the unique values we observe
in our universe ... but this has not happened. Instead, it turns out that there are a variety of
values in the universe that seem to require a very narrow, specific range for our universe to
function the way it does. This has become known as the fine-tuning problem, in that it is a
problem to explain how these values are so finely-tuned for human life.

Carter's anthropic principle allows for a wide range of theoretically possible universes, each
containing different physical properties, and ours belongs to the (relatively) small set of them
that would allow for human life. This is the fundamental reason that physicists believe there
are probably multiple universes. (See our article: "Why Are There Multiple Universes?")

This reasoning has become very popular among not only cosmologists, but also the physicists
involved in string theory. Physicists have found that there are so many possible variants of
string theory (perhaps as many as 10^500, which really boggles the mind ... even the minds of
string theorists!) that some, notably Leonard Susskind, have begun to adopt the viewpoint that
there is a vast string theory landscape, which leads to multiple universes, and that anthropic
reasoning should be applied in evaluating scientific theories related to our place in this
landscape.

One of the best examples of anthropic reasoning came when Steven Weinberg used it to
predict the expected value of the cosmological constant and got a result that predicted a small
but positive value, which didn't fit with the expectations of the day. Nearly a decade later,
when physicists discovered the expansion of the universe was accelerating, Weinberg realized
that his earlier anthropic reasoning had been spot on:

"... Shortly after the discovery of our accelerating universe, physicist Steven Weinberg
proposed, based on an argument he had developed more than a decade earlier--before the
discovery of dark energy--that ... perhaps the value of the cosmological constant that we
measure today were somehow "anthropically" selected. That is, if somehow there were many
universes, and in each universe the value of the energy of empty space took a randomly
chosen value based on some probability distribution among all possible energies, then only in
those universes in which the value is not that different from what we measure would life as we
know it be able to evolve.... Put another way, it is not too surprising to find that we live in a
universe in which we can live!" -- Lawrence M. Krauss, A Universe From Nothing

Criticisms of the Anthropic Principle

There's really no shortage of critics of the anthropic principle. In two very popular critiques of
string theory, Lee Smolin's The Trouble With Physics and Peter Woit's Not Even Wrong, the
anthropic principle is cited as one of the major points of contention.

The critics do make a valid point that the anthropic principle is something of a dodge, because
it reframes the question that science normally asks. Instead of looking for specific values and
the reason why those values are what they are, it instead allows for an entire range of values
as long as they're consistent with an already-known end result. There is something
fundamentally unsettling about this approach.

If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence
passed on to the next generations of creatures, what statement would contain the most
information in the fewest words? I believe it is the atomic hypothesis (or the atomic fact, or
whatever you wish to call it) that all things are made of atoms--little particles that move
around in perpetual motion, attracting each other when they are a little distance apart, but
repelling upon being squeezed into one another. In that one sentence, you will see, there is an
enormous amount of information about the world, if just a little imagination and thinking are
applied...

Feynman was right to indicate that this is a powerful statement, but he also knew that it wasn't
the end of the story, either. The atom is not, as the ancient Greeks believed, the smallest,
indivisible constituent of matter. Feynman himself helped in fact to crack open the atom as
part of the Manhattan Project, and then later was in a tight race with Murray Gell-Mann to
figure out what made up the protons and neutrons in the atomic nucleus ... an effort that
ultimately ended in quantum chromodynamics.

Today, it can be a bit dizzying trying to sort out all of the various particles that physicists
have found by banging tiny chunks of matter together in particle accelerators. But if
you want a good starting point, may I humbly recommend our very own Particle Physics
Fundamentals, which lays out the major categories of particles and how they relate to each
other.

Last week, both major presidential candidates responded to a series of questions put together
by the non-profit Science Debate organization. I've used this as my starting point for a piece
on Science Issues in the 2012 Presidential Election, which will evolve a bit as these issues
continue being addressed by the candidates (if that happens).

For some excellent perspective on these issues and the responses, there's a YouTube video
where Dr. Kiki Sanford discusses the results with Shawn Lawrence Otto, author of Fool Me
Twice: Fighting the Assault on Science in America and co-founder of Science Debate. It
runs about 47 minutes, but the analysis is quite good and I think gives both candidates their
due for the good responses.

My perspective, as someone who isn't a policy wonk or anything but does have a mild interest
in watching this political back-and-forth, is that Romney is the one who really needs to win
people over among the pro-science community. Obama has done a lot of outreach to support
(or at least to give lip-service to supporting) science. This wasn't particularly hard in
comparison to the previous administration's perceived position on science. Rightly or
wrongly, Republicans are widely viewed as being more anti-science than Democrats ... often
even by other Republicans! I actually think that Romney's responses were fairly good and he
certainly seems to have a better handle on the science than many other Republican candidates
have demonstrated.

Many of the science-based issues facing a president on the national and international level
have to do with energy policy. Two excellent books that I'd like both candidates to become
familiar with are Richard A. Muller's Energy for Future Presidents: The Science Behind the
Headlines and Maggie Koerth-Baker's Before the Lights Go Out: Conquering the Energy
Crisis Before It Conquers Us. (Reviews of these books will come shortly. Keep your eyes
open!)

Science and Politics:

There are a wide range of issues at stake in any political election. Whether you're talking
about healthcare, environmental concerns, educational policy, technological infrastructure,
business innovation, or any of a host of other topics, many of the solutions required involve
understanding the scientific issues at stake.

For this reason, in 2008, a group came together to lobby for a Science Debate between John
McCain and Barack Obama. The full debate never came about, but the movement gained
steam and resumed its work to try for a similar Science Debate between Barack Obama and
Mitt Romney in the 2012 election. (For more on this, see Science Debate co-founder Shawn
Lawrence Otto's book Fool Me Twice: Fighting the Assault on Science in America. Another
good book on the subject is Unscientific America: How Scientific Illiteracy Threatens Our
Future.)

I would personally be surprised if an actual debate focused on scientific issues takes place,
but the candidates have both answered the call by responding to a series of questions from
Science Debate. The full responses of the candidates can be found on the Scientific American
website, and editors will evaluate the scientific validity of their responses in the November
issue of the magazine (which is slated to come out in mid-October). For an initial review of
the results, there's a good YouTube video of Shawn Lawrence Otto with Dr. Kiki Sanford
covering the responses.

The Questions:

Scientific American, in conjunction with the folks from Science Debate, asked both candidates
Barack Obama and Mitt Romney a series of 14 questions about urgent science-related
policies. The questions focused on the following areas:

1. Innovation and the Economy


2. Climate Change
3. Research and the Future
4. Pandemics and Biosecurity
5. Education
6. Energy
7. Food
8. Fresh Water
9. The Internet
10. Ocean Health
11. Science in Public Policy
12. Space
13. Critical Natural Resources
14. Vaccination and Public Health

In the meantime, however, I thought it would be prudent to highlight some of the major issues
of the campaign and give my own thoughts on some of their responses. I'll focus mostly on
their non-biology comments, since I really don't have much expertise to be chiming in on
biosecurity.

Innovation:

In response to a question on innovation, Mitt Romney certainly receives the credit for details.
In a multi-part response, he ties innovation quite firmly into his economic agenda. He is, after
all, campaigning on the strength of his background supporting the growth of businesses, so
this seems to be an area of natural strength for him. My one concern is
whether he would really see a role for the federal government in funding basic research, as
Obama clearly does. Though he chides Obama's policies as a failure, Romney does say that
"we must never forget that the United States has moved forward in astonishing ways thanks to
national investment in basic research and advanced technology. As president, I will focus
government resources on research programs that advance the development of knowledge, and
on technologies with widespread application and potential to serve as the foundation for
private sector innovation and commercialization."

Both candidates also agree that innovation requires a strong educational system.

President Obama: "To prepare American children for a future in which they can be
the highly skilled American workers and innovators of tomorrow, I have set the goal
of preparing 100,000 science and math teachers over the next decade. These teachers
will meet the urgent need to train one million additional science, technology,
engineering and math (STEM) graduates over the next decade."
Mitt Romney: " I will take the unprecedented step of tying federal funds directly to
dramatic reforms that expand parental choice, invest in innovation, and reward
teachers for their results instead of their tenure."

While both of these plans sound good on the surface, neither is flawless. Where will the
100,000 teachers come from? New teachers take time to become good at teaching, and the
lag before they enter the workforce will be several years, so how can the success of such a
plan really be evaluated? On the Romney front, the question is whether a
mandate to "expand parental choice" really has a strong impact on student performance. This
is a claim for which the research evidence is decidedly mixed.

Climate Change:

Both candidates did well, I think, on the climate change question. Obama's strong positions on
this are fairly well known, but there were a few tidbits of note in the candidates' responses.
Some of the highlights:

President Obama: "Since I took office, the U.S. is importing an average of 3 million
fewer barrels of oil every day, and our dependence on foreign oil is at a 20-year low."
Mitt Romney: "... my best assessment of the data is that the world is getting warmer,
that human activity contributes to that warming, and that policymakers should
therefore consider the risk of negative consequences."

Research Funding:

President Obama: "... under the Recovery Act, my administration enacted the
largest research and development increase in our nation's history. Through the
Recovery Act, my Administration committed over $100 billion to support
groundbreaking innovation with investments in energy, basic research, education
and training, advanced vehicle technology, health IT and health research, high
speed rail, smart grid, and information technology."
Mitt Romney: "I am a strong supporter of federally funded research, and continued
funding would be a top priority in my budget. The answer to spending constraints is
not to cut back on crucial investments in America's future, but rather to spend money
more wisely."

One point of note is that in this response, President Obama makes reference to making
permanent the R&D tax credit, which Romney said he wanted to make permanent in the
"Innovation" response (and is noted among Romney's tax proposals elsewhere). So, it would
seem as if both candidates want to make this R&D tax credit permanent, although one
assumes that they want to offset that with different economic adjustments.

With all the excitement about the Higgs boson, it seems like a good time to think about the
other bosons that we know about. The Standard Model of particle physics contains a total of
four bosons (not counting the theoretical Higgs). These bosons are considered force carriers,
because they mediate the three fundamental forces of physics that are explained by
quantum physics. The bosons associated with these three forces are:

Photon
Gluon
W Boson
Z Boson

There are four bosons because the W boson and Z boson work together to mediate the weak
nuclear force.

In addition to the above bosons, theories of quantum gravity also propose another type of
boson, the graviton, which would mediate the gravitational force. To date, however, this
boson has not been confirmed.
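The force-to-carrier relationships above can be summarized in a small lookup table. This is just an illustrative sketch of the mapping described in the text, with the graviton flagged as unconfirmed:

```python
# Standard Model force carriers, as laid out above. The graviton entry
# is hypothetical and sits outside the Standard Model proper.
FORCE_CARRIERS = {
    "electromagnetism": ["photon"],
    "strong nuclear": ["gluon"],
    "weak nuclear": ["W boson", "Z boson"],  # two bosons share one force
    "gravity (beyond the Standard Model)": ["graviton (unconfirmed)"],
}

confirmed = [b for force, bosons in FORCE_CARRIERS.items()
             if "gravity" not in force for b in bosons]
print(len(confirmed))  # 4 confirmed force-carrying bosons
```

The count of four comes out because the weak force contributes two carriers while the other two forces contribute one each.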

Peter Higgs - Biographical Profile

General Information:

Birth Date: May 29, 1929


Birthplace: Wallsend, North Tyneside, England

Education: King's College London, 1950

Awards and Recognition:

1983 - Named fellow of the Royal Society


1984 - Awarded the Rutherford Medal and Prize
1991 - Named fellow of the Institute of Physics
1997 - Dirac Medal and Prize from the Institute of Physics
1997 - High Energy and Particle Physics Prize by the European Physical Society
2004 - Wolf Prize in Physics
2008 - Honorary fellowship from Swansea University
2010 - J. J. Sakurai Prize for Theoretical Particle Physics
2011 - The Edinburgh Award

The Higgs Mechanism:

In the 1960s, theoretical physics was having trouble explaining why certain fundamental
particles had any mass at all. Building on hints and approaches developed by others, Peter
Higgs put forward a theory in 1964 to explain the way that the fundamental laws of physics
could work as expected but still result in the observed mass of fundamental particles. He
suggested that there was a quantum field permeating all of space and that mass resulted from
disturbances in this field, which is called the "Higgs field."

One of the results of this theory (sometimes also called the "Higgs mechanism") was to
propose a new fundamental particle of the universe, the Higgs boson. Unfortunately for
efforts to detect the Higgs boson experimentally, the theory doesn't predict a precise mass
value (or even a clear mass range) for the Higgs boson. This has forced experimentalists to
begin searching through the possible mass ranges and narrow down the possibilities.

Higgs Boson Search:

In the years since its development, the Higgs mechanism has been incorporated as an element
of the Standard Model of particle physics, but the Higgs boson has been the most elusive
element of the Standard Model to detect experimentally.

The search for the Higgs boson is a major focus of the Large Hadron Collider in Geneva,
Switzerland. On July 4, 2012, CERN made an announcement indicating that evidence had
been found which strongly suggests they may have located a particle consistent with what is
expected of the Higgs boson. Further tests are needed to confirm the interpretation of the
data.

Personal Views:

Peter Higgs is an atheist and holds strong political views. He declined an award for his work
in Jerusalem because he felt it would imply support of Israeli policies in Palestine, which he
opposed. He supports environmental causes and has opposed nuclear weapon development, though
he has broken with some official organizations that support these causes because he does not agree
with the full extent of their activism. For example, he supports the use of genetically modified
organisms and safe nuclear power.

Researchers at the Large Hadron Collider announced yesterday that they may well have found
the long-sought Higgs boson, sometimes called the "God particle," which is the final
missing part of the Standard Model of particle physics. Theoretical physicist Peter Higgs
predicted the existence of the particle back in the 1960s, but he was so far ahead of the
technology that it took nearly half a century to actually get the first glimpse of the thing ...
except, of course, for the glimpses that are all around us, if Higgs is right.

[Image: Peter Higgs awaits word from CERN on the potential discovery of the Higgs boson. Source: CERN]
Because if he is right, then evidence of the Higgs boson is everywhere. See, the reason Peter
Higgs needed to propose his theory was that the physical theories he had to work with at the
time had one major flaw: they didn't explain why there was any stuff in the universe.

That's right. The very best scientific explanations that physicists could come up with had a
gaping hole in the middle of them. They depicted a universe that was so elegant and finely
tuned that ... it shouldn't actually have anything in it. For example, the gauge bosons that
mediate the weak nuclear force (called the W boson and Z boson) should, according to theory,
have absolutely no mass. But they do have mass! So Peter Higgs set out to try to explain why
and how matter itself could exist, in a way that was fully consistent with all of the known laws
of physics.

The result was to propose a field in empty space, a field that permeates all of space, called the
Higgs field, which has the right properties needed to give mass to these particles ... and, in
turn, to cause the mass of all the rest of the universe, as well.

And, in quantum physics, fields can also be expressed as particles (one of the many weird
things about quantum physics), so the resulting particle was called the Higgs boson. (It was
called a boson because it had a spin of 0. If it had a spin of one-half it would have been a
Higgs fermion, but then it wouldn't have been able to do what it needed to do!)
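The boson/fermion distinction follows the spin-statistics theorem: integer spin makes a particle a boson, half-integer spin makes it a fermion. A tiny illustrative classifier:

```python
from fractions import Fraction

def statistics(spin):
    """Classify a particle by the spin-statistics theorem:
    integer spin -> boson, half-integer spin -> fermion."""
    s = Fraction(spin)
    return "boson" if s.denominator == 1 else "fermion"

examples = [("Higgs", 0), ("photon", 1),
            ("electron", "1/2"), ("graviton (hypothetical)", 2)]
for name, spin in examples:
    print(f"{name}: spin {spin} -> {statistics(spin)}")
```

The Higgs, with spin 0, lands in the boson column, which is why it can condense into the space-filling field the text describes.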

As with most things in physics, that's an over-simplification of the story. It sounds like Higgs
came up with the whole idea out of nowhere, but he didn't. The ideas were built on the work
of others and many others came upon similar ideas at almost exactly the same time, so even
calling the resulting fields and particles "Higgs" can be a controversial thing to do. Still, the
fact is that he was a key player in creating the model which, over the last almost-fifty years,
has been refined to explain how the symmetries of the universe are broken in the precise way
that we need in order to get matter.

And that model may be about to be confirmed by experimental evidence!

Congrats to you, Peter Higgs ... and to all the other players in this drama that is theoretical
physics!

Definition: The Higgs boson is a theoretical particle that is part of the Standard Model of
quantum physics.

In the Standard Model, all of space is permeated by the Higgs field, which has a non-zero
value everywhere. The field has two neutral and two charged components. One of the neutral
components and both of the charged components are absorbed to give mass to the W & Z
bosons, which mediate the weak force, one of the fundamental forces of physics.

The remaining neutral component manifests as the scalar Higgs boson, which has neither
charge nor spin (and, having integer spin zero, follows Bose-Einstein statistics, making it a
boson). This is crucial in using the Standard Model to explain where mass comes from.

The Higgs boson was first theorized in 1964 by the British physicist Peter Higgs, who
expanded on the ideas of American theoretical physicist Philip Anderson.
In May 2010, evidence came to light at Fermilab suggesting there may be as many as five
different types of Higgs boson. Physicists are still working out the implications of these
results. If the Large Hadron Collider is able to successfully create Higgs bosons, then it will
be possible to test the theoretical predictions in greater detail and understand more about the
particle.

The Higgs boson is the only Standard Model particle that has not been observed
experimentally, though recent evidence, announced July 4, 2012, indicates that it may have
manifested within the Large Hadron Collider. Further evidence needs to be collected to
confirm this.

Also Known As: Higgs particle

The Large Hadron Collider (LHC) is a particle accelerator built near Geneva, Switzerland.
Buried approximately 50 to 175 meters underground, the Large Hadron Collider resides inside
a circular tunnel roughly 27 kilometers in circumference, running along the border between
Switzerland and France.

What Does the Large Hadron Collider Do?

The LHC circulates a beam of charged particles (specifically hadrons, either protons or lead
ions) through a tube maintained at a continuous vacuum. The particles are guided and
accelerated through this tube by a series of superconducting magnets. To maintain the
magnets' superconducting properties, they are kept supercooled near absolute zero by a
massive cryogenic system.

Once the beam reaches its highest energy levels, obtained by steadily increasing the energy as
the beam circles repeatedly through the magnets, it will be maintained in a storage ring. This
is a loop of tunnel where the magnets will keep circulating the beam so that it retains its
kinetic energy, sometimes for hours on end. The beam can then be routed out of the storage
ring to be sent into the various testing areas of the LHC.

The beams are expected to reach energies of up to 7 TeV (7 x 10^12 electronvolts) each. Since
two beams collide with each other, the energy of the collisions is therefore anticipated to
reach 14 TeV for protons.

In addition, by accelerating heavier lead ions, physicists anticipate collisions with energies in
the range of 1,250 TeV ... energy levels on the order of those obtained only moments after the
Big Bang. (Not the energies obtained during the Big Bang. The TeV energy scale is about
10^16 times smaller than the Planck mass energy scale, for example, which Lee Smolin uses as the
top of his particle energy scale in The Trouble with Physics. Presumably, the Big Bang energy
levels would have been somewhere on this Planck energy scale or higher, where the quantum
physics and general relativity aspects of reality both begin to break down.)
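To get a feel for these numbers, here's a minimal Python sketch converting the quoted collider energies into joules and comparing them to the Planck scale. The eV-to-joule factor is the standard (exact) SI value, and the Planck energy of roughly 1.22 x 10^19 GeV is the standard figure; the variable names are just illustrative.

```python
# Converting collider energies to joules: 1 eV = 1.602176634e-19 J (exact).
EV_TO_J = 1.602176634e-19
TEV = 1e12  # electronvolts per TeV

beam_tev = 7.0                    # design energy of each proton beam
collision_tev = 2 * beam_tev      # head-on collision: 14 TeV total
print(collision_tev * TEV * EV_TO_J)        # ≈ 2.24e-06 J per collision

# The Planck energy (~1.22e19 GeV = 1.22e16 TeV) dwarfs even LHC collisions:
planck_tev = 1.22e16
print(f"{planck_tev / collision_tev:.1e}")  # ≈ 8.7e+14 times higher
```

Tiny as a couple of microjoules sounds, it is concentrated into a region far smaller than an atom, which is what makes these collisions so extreme.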

What Is the Large Hadron Collider Looking For?

Since the Large Hadron Collider produces collisions of such high energy, the hope is that
they will release exotic particles which are normally not observed. Any results from the
Large Hadron Collider collisions should have a major impact on our understanding of
physics, either confirming or refuting the projections from the Standard Model of particle
physics.

One major product which is being looked for is the Higgs boson, the last particle from the
Standard Model of particle physics that hasn't been observed.

It's also possible that the LHC will create some indicators of exotic dark matter, which -
together with dark energy - makes up roughly 95% of the universe's contents but cannot be
directly observed!

Similarly, there might be some evidence of the extra dimensions predicted by string theory.
The fact is that we just don't know until we perform the experiments!

LHC Experiments

There are a variety of ongoing experimental systems built into CERN:

- ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) - these
two large, general-purpose detectors are capable of analyzing the particles produced
in LHC collisions. Having two such detectors, designed and operated on different
principles, allows independent confirmation of the results.
- ALICE (A Large Ion Collider Experiment) - this experiment collides lead ions,
creating energies similar to those just after the Big Bang. The hope is to create the
quark-gluon plasma believed to have existed at those energy levels.
- LHCb (LHC beauty) - this detector specifically looks for the beauty quark, which
allows it to study the differences between matter and antimatter, including why our
universe appears to have so much matter and so little antimatter!
- TOTEM (TOTal Elastic and diffractive cross section Measurement) - this smaller
detector analyzes "forward particles," which only brush past each other instead of
colliding head-on. It can measure the size of the proton, for example, and the
luminosity within the LHC.
- LHCf (LHC forward) - this small detector also studies forward particles, but analyzes
how the cascades of charged particles within the LHC relate to the cosmic rays that
bombard the Earth from outer space, helping interpret and calibrate cosmic-ray
studies.

Who Runs the Large Hadron Collider?

The Large Hadron Collider was built by the European Organization for Nuclear Research
(CERN). It is staffed by physicists and engineers from around the world. Nations participating
in the construction and experiments consist of:
Armenia, Australia, Austria, Azerbaijan Republic, Belarus, Belgium, Brazil, Bulgaria,
Canada, China, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France,
Georgia, Germany, Greece, Hungary, India, Israel, Italy, Japan, Korea, Morocco, Netherlands,
Norway, Pakistan, Poland, Portugal, Romania, Russia, Slovak Republic, Slovenia, Spain,
Sweden, Switzerland, Turkey, Ukraine, United Kingdom, United States, Uzbekistan

How Much Did It Cost?


Building the accelerator, including manpower and materials, cost 3.03 billion euros -
roughly 4 billion U.S. dollars (using the conversion rate from Sept. 4, 2008). On top of this,
of course, is the cost of the various experiments and computing power.

How Is It Going?

The Large Hadron Collider originally went online in September 2008 and, within about a
week, had to shut down due to a faulty electrical connection that caused a helium leak from
the supercooled magnet system. After about a year of repairs, the LHC went online once
again, this time with much more success. In December 2009 it produced beams with an
energy of 1.18 TeV each, resulting in collisions of 2.36 TeV - the highest-energy particle
collisions ever produced on Earth at that time. Physicists are still analyzing the data from
these collisions to discover what they mean.

Definition: In particle physics, a boson is a type of particle that obeys the rules of Bose-
Einstein statistics. Bosons have a quantum spin with an integer value, such as 0, 1, or 2. (By
comparison, there are other types of particles, called fermions, that have a half-integer spin,
such as 1/2 or 3/2.)

What's So Special About a Boson?

Bosons are sometimes called force particles, because it is the bosons that control the
interaction of physical forces, such as electromagnetism and possibly even gravity itself.

The name boson comes from the surname of Indian physicist Satyendra Nath Bose, a brilliant
physicist from the early twentieth century who worked with Albert Einstein to develop a
method of analysis called Bose-Einstein statistics. In an effort to fully understand Planck's law
(the thermodynamics equilibrium equation that came out of Max Planck's work on the
blackbody radiation problem), Bose first proposed the method in a 1924 paper trying to
analyze the behavior of photons. He sent the paper to Einstein, who was able to get it
published ... and who then went on to extend Bose's reasoning beyond photons to matter
particles as well.

One of the most dramatic effects of Bose-Einstein statistics is the prediction that bosons can
overlap and coexist with other bosons. Fermions, on the other hand, cannot do this, because
they follow the Pauli Exclusion Principle. (Chemists focus primarily on the way the Pauli
Exclusion Principle impacts the behavior of electrons orbiting an atomic nucleus.) Because of
this, it is possible for photons to form a laser beam and for some matter to form the exotic
state known as a Bose-Einstein condensate.
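The practical difference between the two statistics can be sketched with the textbook occupation-number formulas. This is a simplified illustration (energies in units where kT = 1, chemical potential set to zero); the function names are my own.

```python
import math

# Mean number of particles occupying a single energy state, in units
# where kT = 1 and the chemical potential mu = 0.
def bose_einstein(e):
    """Bose-Einstein occupation: unbounded as e -> 0."""
    return 1.0 / (math.exp(e) - 1.0)

def fermi_dirac(e):
    """Fermi-Dirac occupation: never exceeds 1 (Pauli exclusion)."""
    return 1.0 / (math.exp(e) + 1.0)

# Fermions can never pack more than one particle into a state ...
print(fermi_dirac(0.01) < 1.0)    # True
# ... while bosons pile up without limit in low-energy states.
print(bose_einstein(0.01) > 1.0)  # True (≈ 99.5 particles per state)
```

That unbounded piling-up at low energy is exactly what makes lasers and Bose-Einstein condensates possible.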

Fundamental Bosons

According to the Standard Model of quantum physics, there are a number of fundamental
bosons, which are not made up of smaller particles. This includes the basic gauge bosons, the
particles that mediate the fundamental forces of physics (except for gravity, which we'll get to
in a moment). These four gauge bosons have spin 1 and have all been experimentally
observed:
- Photon - Known as the particle of light, photons carry all electromagnetic energy and
act as the gauge boson that mediates electromagnetic interactions.
- Gluon - Gluons mediate the strong nuclear force, which binds quarks together to form
protons and neutrons and also holds the protons and neutrons together within an
atom's nucleus.
- W Boson - One of the two gauge bosons involved in mediating the weak nuclear
force.
- Z Boson - One of the two gauge bosons involved in mediating the weak nuclear force.

In addition to the above, there are other fundamental bosons predicted, but without clear
experimental confirmation (yet):

- Higgs Boson - According to the Standard Model, the Higgs boson is the particle that
gives rise to all mass. On July 4, 2012, scientists at the Large Hadron Collider
announced that they had good reason to believe they'd found evidence of the Higgs
boson. Further research is ongoing in an attempt to get better information about the
particle's exact properties. The particle is predicted to have a quantum spin value of 0,
which is why it is classified as a boson.
- Graviton - The graviton is a theoretical particle which has not yet been
experimentally detected. Since the other fundamental forces - electromagnetism, the
strong nuclear force, and the weak nuclear force - are all explained in terms of a gauge
boson that mediates the force, it was only natural to attempt to use the same
mechanism to explain gravity. The resulting theoretical particle is the graviton, which
is predicted to have a quantum spin value of 2.
- Bosonic Superpartners - Under the theory of supersymmetry, every fermion would
have a so-far-undetected bosonic counterpart. Since there are 12 fundamental
fermions, this would suggest that - if supersymmetry is true - there are another 12
fundamental bosons that have not yet been detected, presumably because they are
highly unstable and have decayed into other forms.

Composite Bosons

Some bosons are formed when two or more particles join together to create an integer-spin
particle, such as:

- Mesons - Mesons are formed when a quark and an antiquark bond together. Since
quarks are fermions with half-integer spins, the spin of the resulting two-particle
combination is an integer, making it a boson.
- Helium-4 atom - A helium-4 atom contains 2 protons, 2 neutrons, and 2 electrons ...
and if you add up all of those spins, you'll end up with an integer every time. Helium-4
is particularly noteworthy because it becomes a superfluid when cooled to ultra-low
temperatures, making it a brilliant example of Bose-Einstein statistics in action.

If you're following the math, any composite particle that contains an even number of fermions
is going to be a boson, because an even number of half-integers is always going to add up to
an integer.
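That parity argument can be sketched in a few lines of Python. (A simplification: spins actually combine vectorially, but whether the total is an integer or a half-integer depends only on how many half-integer constituents there are, so this check is sound. The helper name is my own.)

```python
from fractions import Fraction

def is_boson(constituent_spins):
    """A composite particle is a boson when its constituents' spin
    quantum numbers sum to an integer (even count of half-integers)."""
    total = sum(Fraction(s) for s in constituent_spins)
    return total.denominator == 1

# A meson: a quark and an antiquark, each spin-1/2
print(is_boson(["1/2", "1/2"]))  # True
# Helium-4: 2 protons + 2 neutrons + 2 electrons, each spin-1/2
print(is_boson(["1/2"] * 6))     # True
# A single quark (spin-1/2) remains a fermion
print(is_boson(["1/2"]))         # False
```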

Earlier this month, fans of science may have heard a lot of commotion about the transit of
Venus. The transit of Venus is an event, occurring at most twice in a century, in which Venus
passes directly between the Earth and the Sun. When the moon does this, it's an eclipse.
When Venus does it, it's the transit of Venus.

The commotion was mostly related to the rarity of the event and the fact that it was truly
beautiful, for those who were able to witness it. (Remember, don't stare directly at the sun to
observe any astronomical event.) If you missed it, then you can check out the transit of Venus
images over at About.com Space. Comedian/faux political pundit Stephen Colbert nailed it
when he identified the transit of Venus as:

"Truly one of the most majestic examples of something passing in front of something
else."

But once upon a time, the transit of Venus carried some pretty heavy stakes. In 1769, nations
around the world (well, okay, mostly Europe) sent expeditions to measure the time of the
transit of Venus. Using these timings, they were able to apply Kepler's laws of planetary
motion to figure out the distance between the Earth and the Sun. A more precise
measurement of this distance improved a navigator's ability to calculate longitude while at
sea, which ultimately had benefits for warfare and trade. (Back then, it seems, funding pure
scientific endeavors wasn't any more popular than it is now.)

The reason this works is that Kepler's laws govern the motion of planets orbiting the sun,
defining the paths that these planets take and the rates at which they move. Kepler derived
these values from careful observations maintained by his mentor, astronomer Tycho Brahe,
over the course of his lifetime. So, in other words, Kepler knew that his laws applied to the
motion of the planets, but he didn't have a firm theoretical explanation for why this was. It
was later shown that Kepler's laws could be derived from Newton's law of gravity, developed
nearly a century later in 1687, thus providing Kepler's laws with a theoretical framework as
well as the empirical support of evidence. It is always nice to have both, after all.

Kepler's Second Law: A line from the sun to each planet sweeps out equal areas in equal
times. (Image source: Wikipedia, via GNU Free Documentation License)

With these laws firmly in place, the only thing that remained was to make measurements
of the transit of Venus and then apply some angular calculations to figure out the resulting
details about the solar system ... which is where the 1769 transit of Venus expeditions come
in.
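Kepler's third law (T^2 proportional to a^3) is the piece that turns relative timings into relative distances. As a rough illustrative sketch (not the actual 1769 parallax calculation), Venus's orbital period alone pins down its orbital radius in astronomical units:

```python
# Kepler's third law: T^2 is proportional to a^3 for planets orbiting the sun.
# Working in units where Earth's period (1 year) and semi-major axis (1 AU)
# are both 1, a planet's orbital radius follows from its period alone.
venus_period_years = 224.7 / 365.25       # Venus's sidereal period in years
a_venus = venus_period_years ** (2 / 3)   # semi-major axis in AU
print(round(a_venus, 3))  # → 0.723
```

Knowing these ratios, one absolute measurement - the Earth-Sun distance inferred from the transit timings - fixes the scale of the entire solar system.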

The adventures of three of the major expeditions are described in thrilling detail in Mark
Anderson's recent book The Day the World Discovered the Sun. I've got to admit that I'm not
necessarily the sort of guy who jumps at the chance to read travelogues of eighteenth century
scientific expeditions ... but if you are, then it should definitely rank high on your reading list!

A new theoretical model posted to the ArXiv website by Stephen Hawking and colleagues
proposes something very curious: that space may actually have a structure with more in
common with the artistic style of M.C. Escher than with the smooth fabric often taken as the
cornerstone of general relativity. As described in New Scientist (registration required):

The images in question are tessellations, arrangements of repeated shapes, such as


the images of interlocking bats and angels seen in Circle Limit IV. Although these are
flat, they serve as "projections" of an alternative geometry called hyperbolic space,
rather like a flat map of the world is a projection of a globe. For example, although the
bats in the flat projection appear to shrink at an exponential rate at the edges, in
hyperbolic space they are all the same size. These distortions in the projection arise
because hyperbolic space cannot lie flat. Instead, it resembles a twisting, wiggly
landscape of saddle-like hills.

Hawking - together with the University of California's James Hartle - has been working on an
approach to create a quantum picture of cosmology since the 1980s, and the Escher-like
structure fell out of that work. The problem with their approach was that the expansion of the
universe seems to indicate that the universe has a positive cosmological constant, which
makes their equations unstable and fairly useless.

However, the team (also consisting of Thomas Hertog, from the Institute for Theoretical
Physics at Catholic University of Leuven in Belgium) has recently realized that when a
negative cosmological constant is put into the wave functions of the universe using their
models, it turns out that the model can evolve over time to be very similar to a form of string
theory developed by Juan Maldacena in 1997. This famous "Maldacena conjecture"
stunningly linked the holographic principle to string theory and is seen by many as having the
potential to translate string theory into a format that yields more direct insights, because it
links a string theory model together with a quantum physics model and helps reduce the
complexities arising from the extra dimensions. (Here is a link to the ArXiv copy of
Maldacena's 1997 paper, for those interested in the more technical aspects.)

Hawking, Hertog, and Hartle certainly aren't done with their work ... in many ways, it creates
more questions than it solves. That is, if the idea even works at all!

Already, physicist Lubos Motl has indicated on his blog that he doesn't hold particularly high
hopes for this work being decisive and, in fact, it looks like the calculations may have been
refuted in his own comment thread! (The Hawking paper on ArXiv has been revised since
then, though, so it's possible that the sign error he's talking about has been fixed and the
result still stands.)

The truth is that these models are sometimes introduced and get a bit of fanfare, but usually
they don't really end up going anywhere ... or, if they do go somewhere, it takes years for
them to get there. This model is so new that there's very little discussion of it yet, and it's
through that discussion that scientists will help to determine whether or not it has any value.
At present, I'm skeptical, though ... and I'd urge caution before anyone begins declaring that
our universe has some bizarre geometric structure that hasn't been experimentally verified!

Still, if the model could be extended and refined to match experimental data, and if it then
makes predictions that could be further tested, it might represent some true progress at
creating a theory that combines quantum physics with general relativity. And, if that works,
then we'll have discovered that, once again, the universe really doesn't look the way that we
expect it to ... and that would be a pretty neat discovery.

Today I received an e-mail containing the following question:

What is the apropriate [sic] way to spread my revoloutionary [sic] message on how to
transform modern physics?

When you're at all in the public eye for physics, you get these sorts of requests on a fairly
regular basis. I spoke a while back about a rather deft way cosmologist Sean Carroll discussed
dealing with physics cranks. Still, this particular individual was polite, so I decided to respond
at length, offering some of the insights taken from Carroll's earlier comments on this topic:

Not to be glib, but if you want to have the message taken seriously by the physics community,
you'll need to get a doctorate in physics, so that you have a full and complete understanding
of existing physics. You can then formulate your revolutionary framework within the language
and terminology of the physics community as well as craft experiments that will provide
evidence for your approach over the approaches of others. For example, how does your
theory account for the array of fundamental particles which have been experimentally
identified, as well as their individual properties? What sort of results does your theory predict
for various types of particle collisions? That sort of thing.

If you are located near a university, another (less costly) option would be to contact the
physics department there and see if they have any physics seminars that are open to the
public. Researchers frequently present their findings at universities and going to these things
(or to a more established scientific conference, but these things tend to be more costly) will
give you an idea of the amount of rigor and detail that the physics community requires in
presenting their results.

Consider a timeline of the major work by Albert Einstein:

- 1895 - Enters the Swiss Federal Polytechnic (i.e. begins rigorous study of physics)
- 1905 - Publishes initial inklings of special relativity, the photoelectric effect, and
Brownian motion; earns his PhD
- 1915 - Completes work extending special relativity into a full theory of gravity,
general relativity
- 1919 - Observational evidence from an eclipse confirms Einstein's predictions;
general relativity is largely accepted by the physics community
- 1921 - Albert Einstein receives a Nobel Prize for his work on the photoelectric effect

At best, the argument could be made that Einstein transformed physics after 10 years of
intently studying it, but if he'd never gone further than those original inklings of special
relativity he wouldn't be remembered much today. It took not only another decade of his own
work, but also about 15 more years of active engagement by a large segment of the physics
community - Max Planck, Arthur Eddington, & Hermann Minkowski spring to mind
immediately - to transform his correct intuitions into a workable theory. The idea that anyone
today would be able to do it with less effort than this is highly unrealistic and likely
misguided.

One further note: The goal of going to physics seminars should genuinely be to learn about
physics, not so that you can corner some poor unsuspecting postdoctoral student and explain
to him or her how you're more clever than all of the brilliant physicists that they've spent the
last decade or so of their life studying. This will not work. It's fine for you to strike up
conversations and build relationships with physicists, but that relationship should be built
from you trying to learn what they know. This tip from Stephen Covey's 7 Habits of Highly
Effective People is useful here:

"Seek first to understand, then to be understood."

On a recent episode of the podcast The Skeptic's Guide to the Universe, there was a great
interview with physicist Sean Carroll. (Here's a link directly to that episode's show notes.)
Sean is a cosmologist who explores fundamental mysteries of the universe, such as the nature
of time, which he explored in his book From Eternity to Here: The Quest for the Ultimate
Theory of Time.

Toward the end of the interview, Sean is asked whether or not he's approached by "cranks
trying to explain to you their physics of everything." In response, he said he'd already been
contacted once since the interview began.

I can certainly relate to this. I, too, get a lot of e-mails from people who believe that they have
found the missing theoretical nugget that explains the mysteries of the universe. Here are
Carroll's thoughts on these sorts of messages, which are roughly in accord with my own:

If it's a question, especially if it's from a student younger than 25 or someone older than 70,
then I'll try to answer it, because those people are most likely to have questions. In between
those ages, people are most likely to tell me answers and say here's my theory and I hope that
someone will finally listen to me.... I don't pay much attention to those....

The crank phenomenon is sociologically incredibly interesting. I think that there's essentially
nothing that it adds to the progress of science, because science is hard.... I don't think that the
problem is people who just make stuff up. It's people who think they can get the right answer
just by thinking about it without asking what anyone else has ever thought about.

One of the ways I put it to them face to face is, "You're asking me to spend time reading and
understanding your theory. So before I do that, I would like you to take the time to read and
understand my theory, which is every working scientist's theory, namely the Standard Model
of Particle Physics, based on Quantum Field Theory, plus general relativity." Very, very few
of these people have really mastered the basics that any graduate student who gets a Ph.D. in
this field has mastered a long time ago.

There are some fields that attract people like this, the ones where the questions are easy
enough to phrase that people can take a stab at answering them. What is gravity? What's
healthy for you? How did evolution work? You know, if you have a question about "What is
the cross-section of a pi-meson that interacts with a k-meson?" ... No cranks are really
working that out, because they don't know what it means and they don't care about it....
Einstein did us a great disservice by failing to get a job early and going to the patent office.
That gives a lot of license to people to say, "Einstein today would never have succeeded."
What they don't appreciate is that Einstein understood the physics of his time better than
anyone. He was not an outsider in that sense. He was very much working within the system.
He just had trouble getting a job, which a lot of good people do.

There's a lot of great stuff in that passage. The discussion is largely predicated on Margaret
Wertheim's Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of
Everything. This is the "sociologically interesting" aspect that Carroll mentions, though he
doesn't feel these alternative theories have much worth in driving scientific progress.

I especially like Carroll's concluding discussion about Albert Einstein. The "patent clerk"
mythology around Einstein is compelling and pervasive, so it's easy to overlook the fact that
he did already have a Ph.D. and was fully educated about the physics theories of the day. He
wasn't some self-taught bumpkin who just happened to develop an insight, but rather a
brilliant, university-educated theoretical physicist who, as Carroll points out, had trouble
finding a university position (in part, so the story goes, because one of his professors disliked
him and gave him poor recommendations).

For any fringe or outsider physicists who are thinking of sending me their pet theories,
however, I will point out a simple reason not to do so:

I have an undergraduate degree in physics and work as a science writer, not an active
theoretical physicist. Since I did not go to graduate school for physics, I have not mastered the
ins and outs of quantum field theory, general relativity, or the rest of the Standard Model of
Particle Physics. I firmly recognize the limits of my own knowledge and, upon reaching those
limits, I consult with physicists who have Ph.D.'s to help me sort out the confusion.

Further, I have neither the time nor the inclination to learn a new, untested theory of the
universe ... especially one which involves a whole new mathematical formalism, as many of
these armchair theorists seem to invent their own from scratch. I'm perfectly happy to watch
from the sidelines as the professionals handle this and then report on the results.

However, we do have an active Physics Forum where you can post such ideas, and I
encourage you to do so!

Science has at times sought to reach further than it can actually deliver, such as in the case of
attempts to build a perpetual motion machine.

Some recent reports seem to be dancing dangerously close to this line, as they've reported on
the invention of a new propulsion method for spacecraft, based on the work of a teenage
Egyptian student named Aisha Mustafa:

- OnIslam.com - Egyptian Student Invents New Propulsion Method, May 16, 2012
- Fast Company - Mustafa's Space Drive: An Egyptian Student's Quantum Physics
Invention, May 21, 2012
- Crazy Engineers - New Propulsion Method Using Dynamic Casimir Effect Invented
by Egyptian Student, May 27, 2012
- Gizmodo - Egyptian Teenager Invents New Space Propulsion System Based on
Quantum Mechanics, May 29, 2012

The process supposedly uses a method known as the dynamic Casimir effect, in
which vacuum energy actually results in a force acting on metallic plates. The effect is real,
but can it be harnessed to create an effective propulsion system? Mustafa seems to be
applying for a patent for her method and is seeking funding to turn it into a viable method of
space propulsion.

A patent alone doesn't mean much, of course. After all, there's even a
patent on a time machine (as described in Dr. Ronald Mallett's non-fiction book Time
Traveler), but time machines (or even time communicators, since that's closer to the actual
description of Mallett's device) still haven't been created yet. Just applying for a patent on
something doesn't prove that the fundamental scientific concept is actually viable.

Only time will tell whether or not Mustafa's method is more science than fantasy.

Comments (1)

(1) TeeK says (June 4, 2012 at 11:13 am):

Surely things like this cannot be called perpetual motion, since they are simply trying
to tap into a source of energy? This might be more exotic, but it seems to be using an
existing source of energy in the same way geothermal does?

For me the problem seems to be one of scale. The effect is real, but very small; could
enough of it ever be collected to provide a useful source of propulsion or energy?
Think of a motor powered by it in the same way as one of those glass spheres with a
little windmill inside that turns round because one side of each vane is painted
white, while the other side is painted black. Who would have thought that the
pressure of the photons could have caused something so large to rotate? Well, what if
useful energy could be obtained from a motor using the Casimir effect? Oh, and I
claim all rights to the idea, since I have copyright by publishing this comment!

One of the weirdest facts about quantum physics is that particles are constantly springing into
and out of existence all around us. Even within "empty space," which seems like it should
contain no energy at all, there are virtual particle pairs that manifest for a moment before
annihilating each other.

This means that energy is contained even in the empty vacuum of space itself, a fact which
yields all sorts of strange behavior. This vacuum energy may explain the dark energy that
cosmologists observe, but the problem is that the theoretical calculations and the
experimental observations disagree by dozens of orders of magnitude. If the theoretical
calculations were correct, there'd be a lot more vacuum energy (sometimes called vacuum
pressure) and the universe's acceleration would be so fast that the universe probably wouldn't
have been able to form galaxies, stars, and planets.

It's precisely this sort of behavior that physicist Brian Greene refers to as quantum jitters in
his popular science books. In fact, there's a very high likelihood that this sort of "energy from
nothing" aspect of quantum physics provides the physical basis for the formation of the
universe. The Big Bang theory describes how the universe proceeded from the moment of its
creation, but doesn't actually dictate how that creation occurred. Actually, once you have the
laws of quantum physics in place, the idea of manifesting something from nothing becomes
relatively commonplace, as described in Lawrence Krauss' A Universe From Nothing and
Hawking & Mlodinow's The Grand Design. Of course, general relativity is also needed, for
that universe to begin expanding ... at least until we figure out a theory of quantum gravity.

Virtual particles are important in astrophysics for at least one other reason: they provide the
basis for the Hawking radiation, the radiation that should be emitted by black holes.

There are a lot of interesting theories in physics. Matter exists as a state of energy, while waves of
probability spread throughout the universe. Existence itself may exist as only the vibrations on
microscopic, trans-dimensional strings. Here are some of the most interesting theories, to my mind,
in modern physics (in no particular order, despite the enumeration).

Wave Particle Duality

Matter and light have properties of both waves and particles simultaneously. The results of quantum
mechanics make it clear that waves exhibit particle-like properties and particles exhibit wave-like
properties, depending on the specific experiment. Quantum physics is therefore able to make
descriptions of matter and energy based on wave equations that relate to the probability of a particle
existing in a certain spot at a certain time.

Last weekend, I discussed using popular culture to present science. Yesterday, PBS's NOVA
physics blog, "The Nature of Reality," published an essay from me about how science fiction
influences scientists.

In that article, I discuss the ways that science fiction can inspire science, making the point that
the science fiction of H.G. Wells was anticipating Einstein's relativity concepts while other
scientists were thinking that science was about done with its job. (This idea was pointed out in
Lawrence Krauss' book Hiding in the Mirror.)

In fact, just a few years later Lord Kelvin made a speech in which he described "two clouds"
that were on the horizon of physics at the turn of the century:
1. The inability to measure the luminous ether in the Michelson-Morley experiment
2. The inability to solve the blackbody radiation problem in the ultraviolet range, called
the ultraviolet catastrophe

It's safe to say that Lord Kelvin's "two clouds" speech is among the greatest scientific
underestimations in history. While Kelvin considered these to be mere measurement issues,
they instead turned out to set the stage for a radical transformation in our way of thinking
about the universe, rivaled only by the scientific revolution itself.

Today, it's easy to once again think that maybe science has all of the answers. I'll occasionally
hear people who seem to think that science is in its final days, just doing some adjustments on
the decimal points and then we'll be in an age where there are no new scientific discoveries to
be made. Science will be finished at that point, these people claim.

Let me assure you that we're far from having all of the answers. Every discovery made by
science unlocks more doors of the imagination.

Consider this: At best, physicists can claim that we understand 4% of the universe really well.
The rest is shrouded in mystery, because it consists of the poorly-understood dark matter
and dark energy that make up the other 96%.

There are still worlds to conquer and scientific secrets to unlock, and not just in the science
fiction novels. Anyone who's trying to convince you that science currently has all the answers
is missing out on how cool some of the questions are.

Science is nowhere close to ending.

What Happened?:

On Friday, April 27, 1900, the British physicist Lord Kelvin gave a speech entitled
"Nineteenth-Century Clouds over the Dynamical Theory of Heat and Light," which began:

The beauty and clearness of the dynamical theory, which asserts heat and light to be modes of
motion, is at present obscured by two clouds.

Kelvin went on to explain that the "clouds" were two unexplained phenomena, which he
portrayed as the final couple of holes that needed to be filled in before having a complete
understanding of the thermodynamic and energy properties of the universe, explained in
classical terms of the motion of particles.

This speech, together with other comments attributed to Kelvin (such as by physicist Albert
Michelson in an 1894 speech), indicates that he strongly believed the main role of physics in
that day was simply to measure known quantities to a great degree of precision, out to many
decimal places of accuracy.

What are the Clouds?:

The "clouds" to which Kelvin was referring were:


1. The inability to detect the luminous ether, specifically the failure of the Michelson-
Morley experiment
2. The black body radiation effect known as the ultraviolet catastrophe

Why This Matters:

References to this speech have become somewhat popular for one very simple reason: Lord
Kelvin was about as wrong as he could possibly have been. Instead of minor details that had
to be worked out, Kelvin's two "clouds" instead represented fundamental limits to a classical
approach to understanding the universe. Their resolution introduced whole new (and clearly
unanticipated) realms of physics, known collectively as "modern physics."

The Cloud of Quantum Physics:

In fact, Max Planck solved the black body radiation problem in December 1900, months after
Kelvin gave his April speech. In doing so, he had to invoke the concept of limitations on the
allowed energy of emitted light. This concept of a "light quanta" was seen as a simple
mathematical trick at the time, necessary to resolve the problem, but it worked. Planck's
approach precisely explained the experimental evidence resulting from heated objects in the
black-body radiation problem.

However, in 1905, Einstein took the idea further and used the concept to also explain the
photoelectric effect. Between these two solutions, it became clear that light seemed to exist as
little packets (or quanta) of energy (or photons, as they would later come to be called).

Once it became clear that light existed in packets, physicists began to discover that all kinds
of matter and energy existed in these packets, and the age of quantum physics began.

The Cloud of Relativity:

The other "cloud" that Kelvin mentioned was the failure of the Michelson-Morley
experiments to detect the luminous ether. This was the theoretical substance that physicists
of the day believed permeated the universe, so that light could move as a wave. The
Michelson-Morley experiments had been a rather ingenious set of experiments, based on the
idea that light would move at different speeds through the ether depending on how the Earth
was moving through it. They constructed a method to measure this difference ... but it hadn't
worked. It appeared that the direction of light's motion had no bearing on the speed, which
didn't fit with the idea of it moving through a substance like the ether.

Again, though, in 1905 Einstein came along and set the ball rolling on this one. He laid out
the premise of special relativity, invoking a postulate that light always moved at a constant
speed. As he developed the theory of relativity, it became clear that the concept of the
luminous ether was no longer particularly helpful, so scientists discarded it.

References by Other Physicists:

Popular physics books have frequently referenced this event, because it makes it clear that
even very knowledgeable physicists can be overcome by overconfidence at the extent of their
field's applicability.
In his book The Trouble with Physics, theoretical physicist Lee Smolin says the following
about the speech:

William Thomson (Lord Kelvin), an influential British physicist, famously proclaimed that
physics was over, except for two small clouds on the horizon. These "clouds" turned out to be
the clues that led us to quantum theory and relativity theory.

Physicist Brian Greene also references the Kelvin speech in The Fabric of the Cosmos:

In 1900, Kelvin himself did note that "two clouds" were hovering on the horizon, one to do
with properties of light's motion and the other with aspects of the radiation objects emit when
heated, but there was a general feeling that these were mere details, which, no doubt, would
soon be addressed.

Within a decade, everything changed. As anticipated, the two problems Kelvin had raised
were promptly addressed, but they proved anything but minor. Each ignited a revolution, and
each required a fundamental rewriting of nature's laws.

Sources:

The lecture is supposedly available in the 1901 book The London, Edinburgh and Dublin
Philosophical Magazine and Journal of Science, Series 6, volume 2, page 1 ... if you happen
to have it lying around. Otherwise, I've found this Google Books edition.

Definition: Dark energy is a hypothetical form of energy that permeates space and exerts a negative
pressure, which would account for the observed acceleration of the universe's expansion. Dark
energy is not directly observed, but rather inferred from observations of gravitational interactions
between astronomical objects, along with dark matter.

The term "dark energy" was coined by the theoretical cosmologist Michael S. Turner.

Dark Energy's Predecessor

Before physicists knew about dark energy, the cosmological constant was a feature of Einstein's
original general relativity equations, introduced to keep the universe static. Once it was realized
that the universe was expanding, the assumption became that the cosmological constant had a value
of zero ... an assumption that remained dominant among physicists and cosmologists for many years.

Discovery of Dark Energy

In 1998, two different teams - the Supernova Cosmology Project and the High-z Supernova Search
Team - both failed at their goal of measuring the deceleration of the universe's expansion. In fact,
they measured not a deceleration but a totally unexpected acceleration. (Well, almost totally
unexpected: Steven Weinberg had made such a prediction once.)
Further evidence since 1998 has continued to support this finding, that distant regions of the
universe are actually speeding up with respect to each other. Instead of a steady expansion, or
a slowing expansion, the expansion rate is getting faster, which means that Einstein's original
cosmological constant prediction manifests in today's theories in the form of dark energy.

The latest findings indicate that over 70% of the universe is composed of dark energy. In fact,
only about 4% is believed to be made up of ordinary, visible matter. Figuring out more details
about the physical nature of dark energy is one of the major theoretical and observational
goals of modern cosmologists.

Also Known As: vacuum energy, vacuum pressure, negative pressure, cosmological constant

Most people that I know are absolutely disgusted by politicians. No matter what a person's
political affiliation, if you're an intelligent person then you want decisions to be based on a
firm understanding of the reality of the situation. There may be perfectly valid disagreements
about how best to address the problems within this reality, of course, but ultimately no one
wants decisions to be made on a faulty understanding of reality.

And how do we understand reality? Well, the most well-established and consistent method for
understanding the working of reality is science.

Why is this so? Basically it is because science (and, so far as I know, only science) has built
into it a systematic method to continually question its base assumptions, to really get at the
fundamental truths about how reality functions. Sean Carroll put this eloquently in a recent
YouTube video, when he describes skepticism:

Scientists are taught that we should be our own theories' harshest critics. Scientists spend all
of their time trying to disprove their favorite ideas. This is a remarkable way of doing things
that is a little bit counter-intuitive, but helps us resist the allure of wishful thinking.

Unfortunately, virtually no politicians approach things this way. Political thinking typically
starts with a conviction and then proceeds to amass evidence that supports that conviction ...
and disregards evidence that conflicts with their conviction.

However, when really considering the policies that we want implemented, people believe that
scientific reality is a fairly good thing to consider. Some science enthusiasts believe that
religion and religious people are inherently anti-scientific, but a recent poll indicates that this
isn't really the case. Even people who self-identify as religious provide a strong indication that
they want a scientific-based debate between presidential candidates.

In fact, a science debate came in third among presidential debate themes, right behind
economy/taxes and foreign policy/national security, but well ahead of themes such as
faith/values or the environment.

Do you believe that there should be a Science Debate as part of the 2012 presidential election,
focusing on science-themed areas such as innovation, healthcare, and energy policy? Is this a
realistic proposition, or will both parties run from such an enterprise? Offer your thoughts in
the comments below.

A few weeks ago, I discussed one issue with "physics cranks," as referenced on a podcast by
renowned astrophysicist and science communicator Sean Carroll. His main objection was that
many of the people who believe they're revolutionizing the fundamental theories of physics
do not really understand the existing theories, so they can't really quantify how their theories
would resolve the problems which existing theories can solve.


However, there is another problem that is nearly as pervasive. This comes from people who
have legitimate physics credentials and do (or, at least, should) understand the complexities
surrounding a scientific concept, but when they communicate these concepts they do not
make these complexities clear.

I was reminded of this when a friend recently approached me about a book she'd come across,
which claimed to discuss the way quantum physics affects our lives. There was a list of
physicists who were involved in the book and she wanted to know if it was trustworthy. The
only name I recognized was Dr. Fred Alan Wolf, who is well known for his work in trying to
relate human consciousness to quantum physics. He is one of the physicists who is quoted in
both the book and film of The Secret. Here is a quote from my article "Physics Errors in The
Secret":

Two physicists, Dr. Fred Alan Wolf and Dr. John Hagelin, are directly quoted in the book.
While both are respected and accomplished in various circles, their stances on these issues of
physics and consciousness are definitely not mainstream. The attempt within The Secret to
make these controversial views appear to be a consensus of quantum physicists is misleading.

I actually have a great deal of sympathy for those who investigate the foundations of quantum
physics and even the role of the observer ... when it's handled carefully, as it was in the
fantastic book The Quantum Enigma. This was a legitimate discussion of the issues and
complexities involved in observers in quantum physics, such as the role of measurement in
the quantum double slit experiment. They didn't take a bit of uncertainty and try to use it to
justify things well outside the realm of what they were discussing.

And that, ultimately, is one of the major problems when quantum physics gets referenced in
many books aimed at a popular audience, especially those that seem inclined to imply some
sort of mystical result from quantum physics. Here is the response I provided to my friend
about her book question, and I think this tends to apply broadly to these sorts of physics
books:

What will very likely happen in this book is that there will be a lot of very real science. In fact,
probably about 90% of the science will be perfectly valid and correct. The other 10% will
contain assumptions or conclusions that are at odds with the mainstream physics community
and have no direct experimental support. The problem is that the authors will make no real
effort to make it clear that this 10% is any different from the other 90% ... they'll make it
sound like physicists are completely confident about everything they're saying.
There are a lot of curious properties of quantum physics, such as the ways to resolve quantum
entanglement issues like those that show up in the EPR paradox. And some very respected
physicists (most notably Roger Penrose) have ventured speculations in this area. The problem
is how carefully these speculations are framed and the degree to which they make their
uncertainty clear.

A scientist who is truly trying to examine the possibility of these things will make it
absolutely clear that they are speculating. A true scientist will qualify their speculative claims,
quantifying their uncertainty as much as possible. This is, to a large degree, the very essence
of scientific inquiry.

Those who make it sound like such speculations are completely resolved questions aren't
interested in furthering knowledge. They're trying to sell you something, not practice real
science.

This sort of reminds me of a quote from mathematician and philosopher Bertrand Russell:

The whole problem with the world is that fools and fanatics are always so certain of
themselves, but wiser people so full of doubts.

Newton's law of gravity defines the attractive force between all objects that possess mass.
Understanding the law of gravity, one of the fundamental forces of physics, offers profound
insights into the way our universe functions.

The Proverbial Apple

The famous story that Isaac Newton came up with the idea for the law of gravity by having an
apple fall on his head is not true, although he did begin thinking about the issue on his
mother's farm when he saw an apple fall from a tree. He wondered if the same force at work
on the apple was also at work on the moon. If so, why did the apple fall to the Earth and not
the moon?

Along with his Three Laws of Motion, Newton also outlined his law of gravity in the 1687
book Philosophiae naturalis principia mathematica (Mathematical Principles of Natural
Philosophy), which is generally referred to as the Principia.

Johannes Kepler (German astronomer, 1571-1630) had developed three laws governing the
motion of the then-known planets. He did not have a theoretical model for the principles
governing this movement, but rather arrived at them through trial and error over the course of
his studies. Newton's work, nearly a century later, was to take his own laws of motion and
apply them to planetary motion, developing a rigorous mathematical framework for this
planetary motion.

Gravitational Forces

Newton eventually came to the conclusion that, in fact, the apple and the moon were
influenced by the same force. He named that force gravitation (or gravity) after the Latin
word gravitas which literally translates into "heaviness" or "weight."
In the Principia, Newton defined the force of gravity in the following way (translated from
the Latin):

Every particle of matter in the universe attracts every other particle with a force that is
directly proportional to the product of the masses of the particles and inversely proportional
to the square of the distance between them.

Mathematically, this translates into the force equation:

Fg = G * m1 * m2 / r^2

In this equation, the quantities are defined as:

Fg = The force of gravity (typically in newtons)

G = The gravitational constant, which adds the proper level of proportionality to the
equation. The value of G is 6.67259 x 10^-11 N * m^2 / kg^2, although the value will
change if other units are being used.
m1 & m2 = The masses of the two particles (typically in kilograms)
r = The straight-line distance between the two particles (typically in meters)
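As a quick numerical sketch of the law, the snippet below evaluates the attractive force between the Earth and the Moon. The masses and separation are rounded reference values I've assumed for illustration; they don't come from the article itself.

```python
# Newton's law of universal gravitation: Fg = G * m1 * m2 / r^2
G = 6.67259e-11  # gravitational constant, N * m^2 / kg^2

def gravitational_force(m1, m2, r):
    """Magnitude of the attractive force (in newtons) between two point masses."""
    return G * m1 * m2 / r**2

# Rounded reference values (assumed for illustration):
m_earth = 5.97e24   # mass of the Earth, kg
m_moon = 7.35e22    # mass of the Moon, kg
r_em = 3.84e8       # average Earth-Moon separation, m

force = gravitational_force(m_earth, m_moon, r_em)
print(f"{force:.3e} N")  # on the order of 2e20 N
```

Because the force on each body is equal and opposite (Newton's Third Law), dividing this one force by each mass gives the very different accelerations of the Moon and the Earth.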

Interpreting the Equation

This equation gives us the magnitude of the force, which is an attractive force and therefore
always directed toward the other particle. As per Newton's Third Law of Motion, this force is
always equal and opposite: despite any difference in mass or size, the two particles pull on
each other with equivalent force.

Newton's Three Laws of Motion give us the tools to interpret the motion caused by the force,
and we see that the particle with less mass (which may or may not be the smaller particle,
depending upon their densities) will accelerate more than the other particle. This is why light
objects fall to the Earth considerably faster than the Earth falls toward them. Still, the force
acting on the light object and the Earth is of identical magnitude, even though it doesn't look
that way.

It is also significant to note that the force is inversely proportional to the square of the
distance between the objects. As objects get further apart, the force of gravity drops very
quickly. At most distances, only objects with very high masses such as planets, stars, galaxies,
and black holes have any significant gravity effects.

Center of Gravity

In an object composed of many particles, every particle interacts with every particle of the
other object. Since we know that forces (including gravity) are vector quantities, we can view
these forces as having components in the parallel and perpendicular directions of the two
objects. In some objects, such as spheres of uniform density, the perpendicular components of
force will cancel each other out, so we can treat the objects as if they were point particles,
concerning ourselves with only the net force between them.

The center of gravity of an object (which is generally identical to its center of mass) is useful
in these situations. We view gravity, and perform calculations, as if the entire mass of the
object were focused at the center of gravity. In simple shapes - spheres, circular disks,
rectangular plates, cubes, etc. - this point is at the geometric center of the object.
This idealized model of gravitational interaction can be applied in most practical applications,
although in some more esoteric situations such as a non-uniform gravitational field, further
care may be necessary for the sake of precision.

Sir Isaac Newton's law of universal gravitation (i.e. the law of gravity) can be restated in the
form of a gravitational field, which can prove to be a useful means of looking at the situation.
Instead of calculating the forces between two objects every time, we instead say that an object
with mass creates a gravitational field around it. The gravitational field is defined as the force
of gravity at a given point divided by the mass of an object at that point:

g = Fg / m = -(G * M / r^2) r̂

Here both g and Fg are vector quantities. The source mass M is capitalized, and r̂ (r with a
carat above it) is a unit vector pointing in the direction away from the source mass M. Since
that vector points away from the source while the force (and field) are directed toward the
source, a negative sign is introduced to make the vectors point in the correct direction.

This equation depicts a vector field around M which is always directed toward it, with a value
equal to an object's gravitational acceleration within the field. The units of the gravitational
field are m/s^2.
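The field form of the law is easy to check numerically: plugging in rounded reference values for the Earth's mass and radius (assumptions on my part, not figures from this article) should recover the familiar surface value of about 9.8 m/s^2.

```python
# Gravitational field magnitude: g = G * M / r^2 (units of m/s^2)
G = 6.67259e-11      # gravitational constant, N * m^2 / kg^2
M_EARTH = 5.97e24    # mass of the Earth, kg (rounded, assumed)
R_EARTH = 6.371e6    # mean radius of the Earth, m (rounded, assumed)

def field_magnitude(M, r):
    """Gravitational field strength of a point (or uniform spherical) mass M at distance r."""
    return G * M / r**2

g_surface = field_magnitude(M_EARTH, R_EARTH)
print(f"{g_surface:.2f} m/s^2")  # close to the familiar 9.8 m/s^2
```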

When an object moves in a gravitational field, work must be done to get it from one place to another
(starting point 1 to end point 2). Using calculus, we take the integral of the force from the starting
position to the end position. Since the gravitational constant and the masses remain constant, the
integral turns out to be just the integral of 1 / r^2 multiplied by the constants.

We define the gravitational potential energy, U, such that W = U1 - U2. For the Earth (with
mass mE), this yields:

U = -G * mE * m / r

In some other gravitational field, mE would be replaced with the appropriate mass, of course.

Gravitational Potential Energy on Earth

On the Earth, since we know the quantities involved, the gravitational potential energy U can be
reduced to an equation in terms of the mass m of an object, the acceleration of gravity (g = 9.8
m/s^2), and the distance y above the coordinate origin (generally the ground in a gravity problem).
This simplified equation yields a gravitational potential energy of:

U = mgy

There are some other details of applying gravity on the Earth, but this is the relevant fact with
regards to gravitational potential energy.

Notice that if r gets bigger (an object goes higher), the gravitational potential energy increases
(or becomes less negative). If the object moves lower, it gets closer to the Earth, so the
gravitational potential energy decreases (becomes more negative). At an infinite distance,
the gravitational potential energy goes to zero. In general, we really only care about the
difference in the potential energy when an object moves in the gravitational field, so this
negative value isn't a concern.
This formula is applied in energy calculations within a gravitational field. As a form of
energy, gravitational potential energy is subject to the law of conservation of energy.
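To see how the simplified U = mgy relates to the full 1/r form, this sketch compares the exact change in potential energy with the near-surface approximation for a small lift. The Earth's mass and radius are rounded values I've assumed for illustration.

```python
# Comparing the exact gravitational potential energy change with the
# near-surface approximation delta-U = m * g * y.
G = 6.67259e-11      # gravitational constant, N * m^2 / kg^2
M_EARTH = 5.97e24    # mass of the Earth, kg (rounded, assumed)
R_EARTH = 6.371e6    # mean radius of the Earth, m (rounded, assumed)

def potential_energy(m, r):
    """Exact potential energy U = -G * M * m / r (zero at infinite distance)."""
    return -G * M_EARTH * m / r

m, y, g = 10.0, 100.0, 9.8   # lift a 10 kg object by 100 m near the surface

# Exact change: U at the higher point minus U at the surface.
exact = potential_energy(m, R_EARTH + y) - potential_energy(m, R_EARTH)
approx = m * g * y           # the simplified U = mgy form

print(f"exact: {exact:.0f} J, approx: {approx:.0f} J")  # both close to 9.8e3 J
```

The two results agree to a fraction of a percent, which is why the mgy form is fine for everyday heights, while the 1/r form is needed for orbits and escape-velocity problems.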

In his 1905 theory of special relativity, Albert Einstein showed that among inertial frames of
reference there was no "preferred" frame. The development of general relativity came about,
in part, as an attempt to show that this was true among non-inertial (i.e. accelerating) frames
of reference as well.

Evolution of General Relativity

In 1907, Einstein published his first article on gravitational effects on light under special
relativity. In this paper, Einstein outlined his "equivalence principle," which stated that
observing an experiment on the Earth (with gravitational acceleration g) would be identical to
observing an experiment in a rocket ship that accelerated at a rate g. The equivalence
principle can be formulated as:
we [...] assume the complete physical equivalence of a gravitational field and a
corresponding acceleration of the reference system.

as Einstein said or, alternately, as one Modern Physics book presents it:

There is no local experiment that can be done to distinguish between the effects of a uniform
gravitational field in a nonaccelerating inertial frame and the effects of a uniformly
accelerating (noninertial) reference frame.

A second article on the subject appeared in 1911, and by 1912 Einstein was actively working
to conceive of a general theory of relativity that would explain special relativity, but would
also explain gravitation as a geometric phenomenon.

In 1915, Einstein published a set of differential equations known as the Einstein field
equations. Einstein's general relativity depicted the universe as a geometric system of three
spatial and one time dimensions. The presence of mass, energy, and momentum (collectively
quantified as mass-energy density or stress-energy) resulted in a bending of this space-time
coordinate system. Gravity, therefore, was movement along the "simplest" or least-energetic
route along this curved space-time.

The Math of General Relativity

In the simplest possible terms, and stripping away the complex mathematics, Einstein found
the following relationship between the curvature of space-time and mass-energy density:

(curvature of space-time) = (mass-energy density) * 8 * pi * G / c^4


The equation shows a direct, constant proportion. The gravitational constant, G, comes from
Newton's law of gravity, while the dependence upon the speed of light, c, is expected from the
theory of special relativity. In a case of zero (or near zero) mass-energy density (i.e. empty
space), space-time is flat. Classical gravitation is a special case that emerges in a
relatively weak gravitational field, where the enormous c^4 in the denominator and the very
small G in the numerator make the curvature correction tiny.
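To get a sense of just how small that proportionality constant is, here is a quick back-of-the-envelope calculation (a sketch in Python, using standard SI values for G and c):

```python
import math

G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s

# Einstein's coupling constant: (curvature) = kappa * (mass-energy density)
kappa = 8 * math.pi * G / c**4
print(f"8 pi G / c^4 = {kappa:.2e}")  # roughly 2.1e-43 in SI units
```

A concentration of mass-energy therefore has to be enormous before it bends space-time appreciably, which is why classical gravitation works so well in everyday situations.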
Again, Einstein didn't pull this out of a hat. He worked heavily with Riemannian geometry (a
non-Euclidean geometry developed by mathematician Bernhard Riemann years earlier),
though the resulting space was a 4-dimensional Lorentzian manifold rather than a strictly
Riemannian geometry. Still, Riemann's work was essential for Einstein's own field equations
to be complete.

What Does General Relativity Mean?

For an analogy to general relativity, imagine that you stretched a bedsheet or a piece of
elastic out flat, attaching the corners firmly to some secured posts. Now you begin placing things
of various weights on the sheet. Where you place something very light, the sheet will curve
downward under the weight of it a little bit. If you put something heavy, however, the
curvature would be even greater.

Assume there's a heavy object sitting on the sheet and you place a second, lighter, object on
the sheet. The curvature created by the heavier object will cause the lighter object to "slip"
along the curve toward it, trying to reach a point of equilibrium where it no longer moves. (In
this case, of course, there are other considerations -- a ball will roll further than a cube would
slide, due to frictional effects and such.)

This is similar to how general relativity explains gravity. The curvature of a light object
doesn't affect the heavy object much, but the curvature created by the heavy object is what
keeps us from floating off into space. The curvature created by the Earth keeps the moon in
orbit, but at the same time the curvature created by the moon is enough to affect the tides.

Proving General Relativity

All of the findings of special relativity also support general relativity, since the theories are
consistent. General relativity also explains all of the phenomena of classical mechanics, as
they too are consistent. In addition, several findings support the unique predictions of general
relativity:

Precession of perihelion of Mercury


Gravitational deflection of starlight
Universal expansion (in the form of a cosmological constant)
Delay of radar echoes
Gravitational time dilation (gravitational redshift)

One of the biggest news stories of 2011 was the tentative announcement from the OPERA
experimental group at CERN that they had detected neutrinos which seemed to be moving
faster than the speed of light. When I first mentioned this topic back in September ("Can the
Speed of Light Law Be Broken?"), I explained why the equations from the theory of relativity
indicate that the speed of light set an upper limit on how fast any object in the universe should
be moving.

Skepticism was certainly warranted on this claim, as on any claim that would shake the
foundations of science.
In science, the way skepticism works is that efforts are made to see if the claim holds up in
different circumstances. In November, it was revealed that a series of follow-up experiments
were consistent with the earlier findings (see "Speed of Light Update").

It was beginning to look like maybe this might be a real result, which would have been
thrilling for physicists. Contrary to what some in the media portrayed, there was no real
reason for physicists to be predisposed against this idea. If anything, they would be inclined to
believe it. The discovery of actual evidence of something moving faster than the speed of
light would open whole new areas of investigation! Many physicists get into the field with the
goal of discovering some sort of science fiction concept like this.

Alas, more recent evidence seems to be making it clear that these OPERA results very likely
contained serious errors.

Equipment Problems

In February, there was some hubbub over an announcement from CERN which identified
two different defects "that could have an influence on its neutrino timing measurement,"
according to their press release.

An oscillator used to provide the time stamps for GPS synchronizations may have led
to an overestimation of the neutrino's time of flight.
A flaw in the optical fiber connector that brings the external GPS signal to the OPERA
master clock, which would have led to an underestimation of the time of flight of the
neutrinos.

Further tests would be necessary to determine the magnitude of these specific errors. Tests
with shorter pulses are slated for May.

ICARUS Weighs In

There's a second experimental apparatus, ICARUS, which is also in Gran Sasso, Italy, and it
also detects beams from CERN. It turns out they received some of the neutrinos from CERN
back in September as well and they've just completed an independent measurement of the
travel time. Their result shows that these same neutrinos traveled slower than the speed of
light, as predicted by relativity.

This, together with the acknowledgement of equipment failures at OPERA, makes it highly
likely that the speed of light limit was not violated in the original OPERA experiment. As CERN
Research Director, Sergio Bertolucci, says in the press release:

"The evidence is beginning to point towards the OPERA result being an artifact of
measurement ..."

The Way Science Works

It's easy to think that this is an example of science getting something wrong, but I don't feel
that way. Results were found and the scientists released them to the wider scientific
community. Follow-up measurements were made, while the scientists involved in the original
experiment continued to check their own results. Science doesn't guarantee that results are
always correct, but only that if the scientific method is followed, then any errors will
eventually be uncovered ... and that's exactly what's happening here.

One could argue that they should have found the two equipment failures earlier, but
sometimes mistakes do happen and slip through the initial checks. This is why science can't
be performed in a standalone manner, but requires a community of independent researchers.
(One of the best explanations of this aspect of science is in this Scientific American blog post
by Janet Stemwedel, "Objectivity requires teamwork, but teamwork is hard." Check it out!)

Again, I think Bertolucci makes an excellent point in the March 23 update to the press
release:

Whatever the result, the OPERA experiment has behaved with perfect scientific integrity in
opening their measurement to broad scrutiny, and inviting independent measurements. This is
how science works.

Evidence can be a tricky thing.

When someone conducts an experiment, they get a specific bit of data which represents the
result of that experiment performed at that time in exactly that way. These pieces of scientific
evidence then have to be fitted into the existing theoretical framework.

However, sometimes a single piece of evidence doesn't fit well in the framework. This
actually happens fairly frequently. The scientific framework is constantly being amended in
ways large and small to make it more comprehensive. The way that science adapts to this sort
of evidence is what makes it unique among--and superior to--other ways of thinking about facts.
Science is designed to adapt to fit the facts as they are gained.

One of my favorite explanations of science is defined entirely in terms of this evidence-based
approach, in the words of physicist Richard Feynman:

Science is a way to teach how something gets to be known, what is not known, to what extent
things are known (for nothing is known absolutely), how to handle doubt and uncertainty,
what the rules of evidence are, how to think about things so that judgments can be made, how
to distinguish truth from fraud, and from show.

Neutrinos: A Case Study in Good Evidence

The most recent popular example of evidence that failed to match the existing theoretical
framework was the suggestion of faster-than-light neutrinos, which is more and more looking
like it was the result of equipment failure. In my recent blog post on the subject (Another
Lightspeed Limit Update), I suggested that despite being wrong, the OPERA scientists did
things the way they were supposed to. In the words of CERN Research Director Sergio
Bertolucci:

Whatever the result, the OPERA experiment has behaved with perfect scientific integrity in
opening their measurement to broad scrutiny, and inviting independent measurements. This is
how science works.
The scientists at the OPERA experiment were very tentative about their results. They checked
and re-checked their assumptions and equipment. They performed a more refined version of
the experiment, which seemed to confirm their initial evidence. Then they continued to check
their evidence, getting input from the broader body of scientists. When they discovered a
possible disparity, they announced it publicly.

Now another group, using their equipment to check the earlier OPERA results, is finding no
unusual results. It's very likely that this piece of data is in error, probably due to equipment
problems.

Standalone Evidence Isn't Proof

Still, even if we'd never identified the source of the error, eventually the scientific method
would have given us a verdict. Either we would get more results showing neutrinos
traveling faster than the speed of light, or we would not. If we failed to get
confirming evidence, we'd be in a curious position:

We'd have this one bizarre set of experimental results that didn't match anything else
performed before or after.

What is unfortunate is that a lot of people would put substantial weight on this one set of
experimental results. Within days of the experiment, I was receiving e-mails from
independent physicists (and physics cranks) explaining how their theory accounts for the
anomaly. Mind you, this was before we had any confirmation that the anomaly was even real.

Even if we never again see evidence of a neutrino moving faster than the speed of light, there
will be people who point at this one experiment for decades as evidence of their
pseudoscience claims. Why is this faulty? For one simple reason:

A single piece of data does not establish a law of nature.

Praying Away Tornadoes

It's easy to be misled by a single powerful example masquerading as valid evidence.

This occurred to me recently when, in the wake of the tornadoes that swept the Midwest, my
wife was told by an acquaintance that there was video evidence that prayers to Jesus could
stop tornadoes. "Just look on YouTube," she said.

Being always willing to consider evidence, I did look on YouTube. Searching for "pray" and
"tornado" led me to one video, which gained popularity because it was also on CNN. (Here's a
link to the video on YouTube, as well.) It shows a very intimidating tornado coming toward a
woman's home in West Liberty, KY. She can be heard praying passionately in the background
for Jesus to turn away the tornado, and you can see someone's hand in the air, as if it's raised
to ward off the tornado.

Now, I want to be clear about something (even though I know it's futile, and I'm going to get
hate mail regardless):
I am not even going to attempt to convince anyone that prayers don't affect tornadoes.

If you believe that they do, it's a matter of faith, and it's beyond the scope of this blog post to
try to change that opinion. (I leave that to my colleague over at About.com Atheism ... an
excellent blog, by the way.)

What I do hope to convince you, however, is the following:

This video does not prove that praying affects tornadoes.

In fact, not only does the video not constitute proof ... it doesn't even constitute evidence that
praying affects tornadoes.

"But how can that be?" you may ask. The video shows a tornado which appears to be coming
toward the woman, she prays, and the tornado moves away. Surely, this is solid evidence that
the prayer affects the tornado.

It's not, for a variety of reasons. For the purpose of this analysis, I'll assume that the video
hasn't been doctored and that everything demonstrated is exactly as it happened. Remember,
the woman who directed me to this video made a very specific claim (or hypothesis, in
science jargon):

Hypothesis: God will intervene to protect a believer who prays.

This woman isn't alone. Pat Robertson, of the 700 Club, recently made a very similar claim.
While Pat Robertson hasn't, to my knowledge, specifically cited this video, I feel confident
that he'd claim it proved his case.

The problem with the video is that there's an inherent selection bias involved in it. The video
came to prominence precisely because the tornado doesn't strike the woman who is praying.
The only videos like this that will ever make it on television--or even into high YouTube
search rankings--are cases where a person is praying and the tornado spares them. If a tornado
is coming toward a person, they pray, and the tornado hits them, the video isn't particularly
interesting. It's just sad.

But this video gets circulated precisely because it feels like there's a connection between the
prayer and the tornado. Two things happen at a similar time and are linked together
meaningfully in our minds, in a case of what psychologist Carl Jung referred to
as synchronicity. To constitute actual physical evidence about the workings of the world, we
need some clear constraints on how the data is obtained, to remove the emotional aspect from
our analysis as much as possible.

Here are some examples of what we'd need before we could even call this evidence:

1. An analysis of the ways that tornadoes move which includes some sort of mechanism
for defining what is usual or unusual behavior for a tornado's change in severity and
direction. (Because, frankly, I watched the video the first time with the sound off and
didn't even think it looked like the tornado did anything unusual.)
2. A controlled experiment would involve recording both non-believers and believers
(ideally of various faiths) praying to turn tornadoes away.
3. Another set of videos would include recordings of tornadoes in remote areas, ideally
where there is no chance of a believer being nearby to pray.
4. As part of a double-blind experiment, a group of weather experts would be asked to
watch the videos without sound and rate the behavior of the tornadoes based upon the
analysis rating mechanism defined in Step 1.
5. Statistical analysis of these ratings would determine whether there is any difference
between the prayed-for tornadoes and the tornadoes in remote areas.
6. Multiple analyses could compare the success rates of each individual faith category.

If there is a statistically-significant difference between the cases where believers pray for
tornadoes and the cases where no one is around, then that would be evidence that prayer
affects tornado movement.
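As an illustration of what that statistical analysis might look like (the ratings below are entirely made up, and a real study would need the full design described above), a simple permutation test on two hypothetical groups of tornado-behavior ratings could be sketched like this in Python:

```python
import random

def permutation_test(group_a, group_b, n_iter=10000, seed=0):
    """Approximate p-value for the difference in mean ratings
    between two groups, via random relabeling of the pooled data."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical expert "unusualness" ratings (1-5) for each tornado:
prayed_for = [2, 3, 2, 3, 2]
remote = [3, 2, 3, 2, 3]
print(permutation_test(prayed_for, remote))  # large p-value: no evidence of a difference
```

Only a small p-value across many such tornadoes, not a single dramatic video, would count as evidence of an effect.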

This would still not constitute proof, mind you (because, as Feynman says, nothing is known
for certain), but it would be evidence that you could validly cite to support your prayer-
tornado link hypothesis.

Short of that, or a similar study, you've got no evidence ... you've only got a video of a lucky
woman praying, without any evidence of why she was lucky and other people were unlucky.
You can still believe that prayer has an impact, but you believe that without evidence. (That's
why it's called faith, after all.) Trying to claim that it's evidence or proof is misleading.

Evidence and Conclusions

And, of course, even if you had evidence then sometimes, as in the case of the OPERA
experiment, the evidence can lead you to a false conclusion. That's one of many reasons why
science is so great: it has the means to sort out the false conclusions from the true
ones. It requires discipline, but the methodology will always win out in the end.

Do you have an example of when a single piece of evidence led you to a faulty conclusion?
How did you ultimately discover that it was faulty? Share it with us in the comments!

Definition: Stellar nucleosynthesis is the process by which elements are created within stars by
combining the protons and neutrons from the nuclei of lighter elements. This is not the only
form of nucleosynthesis; usually, however, when scientists talk about nucleosynthesis, it's
shorthand for stellar nucleosynthesis.

History of the Theory

The idea that stars fuse together the atoms of light elements was first proposed in the 1920's,
by Einstein's strong supporter Arthur Eddington. However, the real credit for developing it
into a coherent theory is given to Fred Hoyle's work in the aftermath of World War II. Hoyle's
theory contained some significant differences from the current theory, most notably that he
did not believe in the big bang theory but believed instead that hydrogen was continually
being created within our universe. (This alternative theory was called a steady state theory
and fell out of favor when the cosmic microwave background radiation was detected.)

The Early Stars


The simplest type of atom in the universe is a hydrogen atom, which contains a single proton
in the nucleus (possibly with some neutrons hanging out, as well) with a single electron circling that
nucleus. These protons are now believed to have formed when the incredibly high energy
quark-gluon plasma of the very early universe lost enough energy that quarks began bonding
together to form protons (and other hadrons, like neutrons). Hydrogen formed pretty much
instantly and even helium (with nuclei containing 2 protons) formed in relatively short order
(part of a process referred to as Big Bang nucleosynthesis).

As this hydrogen and helium began to form in the early universe, there were some areas
where it was more dense than in others. Gravity took over and eventually these atoms were
pulled together into massive clouds of gas in the vastness of space. Once these clouds got large
enough they were drawn together by gravity with enough force to actually cause the atomic
nuclei to fuse together, in a process called nuclear fusion. The result of this fusion process is
that the two one-proton nuclei have now formed a single two-proton nucleus. In other words,
two hydrogen atoms have become a single helium atom. The energy released during this
process is what causes the sun (or any other star, for that matter) to burn.
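In full, the chain that powers the sun (the proton-proton chain) converts four protons into one helium-4 nucleus, and the released energy comes from the small mass difference between them, via E = mc^2. A rough sketch of that bookkeeping (the mass values are standard, but the arithmetic here deliberately ignores the positrons and neutrinos the chain also produces):

```python
# Mass defect of hydrogen fusion: 4 protons -> 1 helium-4 nucleus.
m_proton = 1.007276   # atomic mass units (u)
m_helium4 = 4.001506  # u, mass of the helium-4 nucleus
u_to_mev = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = 4 * m_proton - m_helium4
energy_mev = mass_defect * u_to_mev
print(f"about {energy_mev:.1f} MeV released per helium nucleus formed")
```

Roughly 0.7 percent of the hydrogen's mass is converted to energy, which is the budget that keeps a star shining for millions to billions of years.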

It takes nearly 10 million years to burn through the hydrogen, and then things heat up and the
helium begins fusing together. Stellar nucleosynthesis continues to create heavier and heavier
elements, until you end up with iron ... but I'll leave it for the Krauss quote below to give you
the elegant details of this process.

Lawrence Krauss on Stellar Nucleosynthesis

Here are a couple of great quotes on the subject from a 2011 talk by astrophysicist Lawrence
Krauss. (Here's a link to the video.) They really describe the process of nucleosynthesis in a
far more beautiful manner than I can:

Every atom in your body came from a star that exploded. And, the atoms in your left hand
probably came from a different star than your right hand. It really is the most poetic thing I
know about physics: You are all stardust. You couldn't be here if stars hadn't exploded,
because the elements - the carbon, nitrogen, oxygen, iron, all the things that matter for
evolution and for life - weren't created at the beginning of time. They were created in the
nuclear furnaces of stars, and the only way for them to get into your body is if those stars
were kind enough to explode.... The stars died so that you could be here today.

"All the hydrogen burns into helium in 10 million years.... All the helium burns to carbon in 1
million years.... Again, the star starts to collapse, because there's no more fuel. But then it
heats up and the carbon starts to burn ... to form neon and nitrogen. And all of the carbon in
the star burns in 100 thousand years.... And you get to oxygen. Oxygen ... burns to silicon in
10 thousand years. It's getting hotter and hotter and hotter. Less efficient. And then when all
the oxygen burns to silicon, you're in the last day of the star because, remarkably, it is so hot
at that point that all of the silicon in the center of the star, many thousands of times the mass
of the Earth, burns to form iron in one day.... Iron can't burn to form anything. Iron is the
most tightly-bound nucleus in nature. So once that's happened, there's no more fuel... When
all the silicon has burned to iron, suddenly the star realizes there's no place left to go and that
interior of the star, which has been held up by the pressure of nuclear burning, collapses.
That whole collapse happens in one second.... There's a shock wave and that shock wave ...
spews out all of the atoms that were created during the life history of a star. The carbon, the
nitrogen, the helium, the iron. And that's vitally important, because every atom in your body
was once inside a star that exploded.... The atoms in your left hand probably came from a
different star than in your right hand, because 200 million stars have exploded to make up the
atoms in your body."

Lawrence Krauss - Biographical Profile

General Information:

Birthdate: May 27, 1954


Birthplace: New York City (but moved to Toronto shortly after his birth and grew up there)

Education & Academic History:

1977 - Undergraduate degrees in mathematics and physics at Ottawa's Carleton
University, having earned first class honours.
1982 - Physics Ph.D. from Massachusetts Institute of Technology
1982 to 1985 - Member of the Harvard Society of Fellows
1985 to 1993 - Assistant and then associate professorship at Yale
1993 to 2005 - Ambrose Swasey Professor of Physics, Professor of Astronomy, and
Chairman of the Department of Physics at Case Western Reserve University
2008 to present (2011, at least) - Foundation Professor in the School of Earth and
Space Exploration and Physics Department, and Inaugural Director of the Origins
Project at Arizona State University

Scientific Background:

Lawrence Krauss is a cosmologist, meaning that he studies the origins of the universe. In fact,
he now runs the Origins Project at Arizona State University, which is a "transdisciplinary
initiative that nurtures research, energizes teaching, and builds partnerships, offering new
possibilities for exploring the most fundamental of questions: who we are and where we came
from." In this capacity, together with his scientific writing, he seeks to spread knowledge
about the origins, evolution, and history of the universe to a general public.

Krauss and Scientific Controversies:

Lawrence Krauss is not one to shy away from a controversy, and working to find the origins
of the universe can certainly cause some controversies.

Religion and Science: Though he is an atheist, he often takes a devil's advocate-style position
which at times places him in amiable conflict with some more prominent atheists, such as
Richard Dawkins (who once said Krauss asked him an "I'm an atheist, but..." question, which
are far harder to tackle than the outright pro-religion questions). It seems that Krauss's goal is
to teach everyone - the faithful and the atheist - what science tells us about the universe and
its history, not particularly caring about changing their underlying belief structure.
String Theory Critic: Krauss is one of the most prominent and respected critics of string
theory. His 2005 book, Hiding in the Mirror details the history and allure of invoking extra
dimensions as a physical explanation, and calls into question whether this is really justified.

Science Writing:

Physicists write a lot of academic papers. In fact, according to his website, he has authored
"over 300 scientific publications." In addition, Krauss regularly writes editorials, essays, and
articles for popular magazines. He's written a number of books for popular audiences, making
him one of the most prominent voices in explaining the history of the universe to lay
audiences.

The Fifth Essence (1991)


Fear of Physics (1994)
The Physics of Star Trek (1995)
Beyond Star Trek (1997)
Quintessence (2001)
Atom (2002)
Hiding in the Mirror (2005)
Quantum Man: Richard Feynman's Life in Science (2010)
A Universe from Nothing: Why There Is Something Rather Than Nothing (Jan. 2012)

Awards:

Gravity Research Foundation First prize award (1984)


Presidential Investigator Award (1986)
American Association for the Advancement of Science's Award for the Public
Understanding of Science and Technology (2000)
Julius Edgar Lilienfeld Prize (2001)
Andrew Gemant Award (2001)
American Institute of Physics Science Writing Award (2002)
Oersted Medal (2003)
American Physical Society Joseph P. Burton Forum Award (2005)
Center for Inquiry World Congress Science in the Public Interest Award (2009)
Helen Sawyer Hogg Prize of the Royal Astronomical Society of Canada and the
Astronomical Society of Canada (2009)

Krauss Quotes:

"You're here because you want to escape reality and that's one of the things that science is
good for, to take us out at least a bit beyond our petty worries and concerns of the day."

"I want to try to ... take you beyond this brief moment in cosmic history, to realize that all of
this isn't important, that the really important stuff is far grander. The next time you're
depressed, you can think about the fact that we're really, in fact, completely insignificant....
You are much more insignificant than you thought."

"Rare events happen all the time because the universe is big and old."
"Every atom in your body came from a star that exploded. And, the atoms in your left hand
probably came from a different star than your right hand. It really is the most poetic thing I
know about physics: You are all stardust. You couldn't be here if stars hadn't exploded,
because the elements - the carbon, nitrogen, oxygen, iron, all the things that matter for
evolution and for life - weren't created at the beginning of time. They were created in the
nuclear furnaces of stars, and the only way for them to get into your body is if those stars
were kind enough to explode.... The stars died so that you could be here today."

"The purpose of education is not to validate ignorance but to overcome it."

"All the hydrogen burns into helium in 10 million years.... All the helium burns to carbon in 1
million years.... Again, the star starts to collapse, because there's no more fuel. But then it
heats up and the carbon starts to burn ... to form neon and nitrogen. And all of the carbon in
the star burns in 100 thousand years.... And you get to oxygen. Oxygen ... burns to silicon in
10 thousand years. It's getting hotter and hotter and hotter. Less efficient. And then when all
the oxygen burns to silicon, you're in the last day of the star because, remarkably, it is so hot
at that point that all of the silicon in the center of the star, many thousands of times the mass
of the Earth, burns to form iron in one day.... Iron can't burn to form anything. Iron is the most
tightly-bound nucleus in nature. So once that's happened, there's no more fuel... When all the
silicon has burned to iron, suddenly the star realizes there's no place left to go and that interior
of the star, which has been held up by the pressure of nuclear burning, collapses. That whole
collapse happens in one second.... There's a shock wave and that shock wave ... spews out all
of the atoms that were created during the life history of a star. The carbon, the nitrogen, the
helium, the iron. And that's vitally important, because every atom in your body was once
inside a star that exploded.... The atoms in your left hand probably came from a different star
than in your right hand, because 200 million stars have exploded to make up the atoms in your
body."

Two-Dimensional Kinematics: Motion in a Plane


This article outlines the fundamental concepts necessary to analyze the motion of objects in two
dimensions, without regard to the forces that cause the acceleration involved. It assumes a
familiarity with one-dimensional kinematics, as it expands the same concepts into a two-dimensional
vector space.

First Step: Choosing Coordinates

Kinematics involves displacement, velocity, and acceleration, which are all vector quantities that
require both a magnitude and direction. Therefore, to begin a problem in two-dimensional
kinematics you must first define the coordinate system you are using. Generally it will be in terms of
an x-axis and a y-axis, oriented so that the motion is in the positive direction, although there may be
some circumstances where this is not the best method.

In cases where gravity is being considered, it is customary to take the direction of gravity as
the negative-y direction. This is a convention that generally simplifies the problem, although it
would be possible to perform the calculations with a different orientation if you really desired.

Velocity Vector
The position vector r is a vector that goes from the origin of the coordinate system to a given point in
the system. The change in position (delta-r) is the difference between the start point (r1) and
the end point (r2). We define the average velocity (vav) as:

vav = (r2 - r1) / (t2 - t1) = delta-r/delta-t

Taking the limit as delta-t approaches 0, we achieve the instantaneous velocity v. In calculus terms,
this is the derivative of r with respect to t, or dr/dt.

As the difference in time shrinks, the start and end points move closer together. Since the
direction of delta-r in this limit is the same as the direction of v, it becomes clear that the
instantaneous velocity vector at every point along the path is tangent to the path.
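That limiting process can be seen numerically. Below is a small sketch (the parabolic path r(t) = (t, t^2) is just an illustrative choice) showing the average velocity approaching the instantaneous velocity as the time interval shrinks:

```python
def average_velocity(r1, r2, t1, t2):
    """Average velocity between two sampled 2D positions (tuples)."""
    dt = t2 - t1
    return ((r2[0] - r1[0]) / dt, (r2[1] - r1[1]) / dt)

# Sample the path r(t) = (t, t**2) near t = 1, shrinking delta-t:
r = lambda t: (t, t ** 2)
for dt in (1.0, 0.1, 0.001):
    print(average_velocity(r(1.0), r(1.0 + dt), 1.0, 1.0 + dt))
# The y-component approaches the derivative dy/dt = 2t = 2 at t = 1.
```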

Velocity Components

The useful trait of vector quantities is that they can be broken up into their component vectors. The
derivative of a vector is the sum of its component derivatives, therefore:

vx = dx/dt
vy = dy/dt

The magnitude of the velocity vector is given by the Pythagorean Theorem in the form:

|v| = v = sqrt(vx^2 + vy^2)

The direction of v is oriented alpha degrees counter-clockwise from the x-component, and can be
calculated from the following equation:

tan alpha = vy / vx
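A short sketch of those two formulas in Python (using atan2 rather than a bare arctangent, since atan2 keeps track of which quadrant the vector lies in):

```python
import math

def velocity_polar(vx, vy):
    """Magnitude and direction (degrees counter-clockwise from +x)
    of a velocity given by its components."""
    v = math.sqrt(vx ** 2 + vy ** 2)          # Pythagorean theorem
    alpha = math.degrees(math.atan2(vy, vx))  # handles all four quadrants
    return v, alpha

print(velocity_polar(3.0, 4.0))  # magnitude 5.0, direction about 53.1 degrees
```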

Acceleration Vector

Acceleration is the change of velocity over a given period of time. Similar to the analysis above, we
find that it's delta-v/delta-t. The limit of this as delta-t approaches 0 yields the derivative of v with
respect to t.

In terms of components, the acceleration vector can be written as:

ax = dvx/dt
ay = dvy/dt

or

ax = d^2x/dt^2
ay = d^2y/dt^2

The magnitude and angle (denoted as beta to distinguish from alpha) of the net acceleration vector
are calculated with components in a fashion similar to those for velocity.
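Those second derivatives can also be approximated directly from sampled positions. A minimal sketch (the free-fall path is just an illustrative choice) using second finite differences:

```python
def acceleration_components(r0, r1, r2, dt):
    """Approximate (ax, ay) = (d^2x/dt^2, d^2y/dt^2) from three
    equally spaced 2D position samples, via second finite differences."""
    ax = (r2[0] - 2 * r1[0] + r0[0]) / dt ** 2
    ay = (r2[1] - 2 * r1[1] + r0[1]) / dt ** 2
    return ax, ay

# Free fall with constant horizontal velocity: x(t) = 5t, y(t) = -4.905 t^2
dt = 0.01
r = lambda t: (5 * t, -4.905 * t ** 2)
print(acceleration_components(r(0.0), r(dt), r(2 * dt), dt))
# Recovers ax = 0 and ay = -9.81 (up to floating-point rounding).
```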
Working with Components

Frequently, two-dimensional kinematics involves breaking the relevant vectors into their x- and y-
components, then analyzing each of the components as if they were one-dimensional cases. Once
this analysis is complete, the components of velocity and/or acceleration are then combined back
together to obtain the resulting two-dimensional velocity and/or acceleration vectors.
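As a worked illustration of this component-by-component approach (the launch numbers are arbitrary), here is projectile motion treated as two independent one-dimensional problems:

```python
import math

def projectile_position(v0, theta_deg, t, g=9.81):
    """Position (x, y) of a projectile launched from the origin at
    speed v0 and angle theta_deg, treating x and y independently."""
    theta = math.radians(theta_deg)
    vx = v0 * math.cos(theta)  # x: uniform motion, no acceleration
    vy = v0 * math.sin(theta)  # y: uniform acceleration -g
    return vx * t, vy * t - 0.5 * g * t ** 2

x, y = projectile_position(20.0, 45.0, 1.0)
print(round(x, 2), round(y, 2))  # prints 14.14 9.24
```

Each component is solved with the ordinary one-dimensional kinematics equations, and only at the end are they read back together as a single two-dimensional position.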

Three-Dimensional Kinematics

The above equations can all be expanded for motion in three dimensions by adding a z-component to
the analysis. This is generally fairly intuitive, although some care must be taken to make sure that
this is done in the proper format, especially in regards to calculating the vector's angle of orientation.
