
CHAPTER 1

ELECTROMAGNETIC WAVE

A disturbance, produced by the acceleration or oscillation of an electric charge, which has


the characteristic time and spatial relations associated with progressive wave motion. A system of
electric and magnetic fields moves outward from a region where electric charges are accelerated,
such as an oscillating circuit or the target of an x-ray tube. The wide wavelength range over which
such waves are observed is shown by the electromagnetic spectrum. The term electric wave, or
hertzian wave, is often applied to electromagnetic waves in the radar and radio range.
Electromagnetic waves may be confined in tubes, such as wave guides, or guided by transmission
lines. They were predicted by J. C. Maxwell in 1864 and verified experimentally by H. Hertz in 1887.

ELECTROMAGNETIC SPECTRUM

The electromagnetic spectrum is the range of all possible frequencies of electromagnetic
radiation.[1] The "electromagnetic spectrum" of an object is the characteristic distribution of
electromagnetic radiation emitted or absorbed by that particular object. The electromagnetic
spectrum extends from below frequencies used for modern radio to gamma radiation at the short-
wavelength end, covering wavelengths from thousands of kilometers down to a fraction of the size
of an atom. The long wavelength limit is the size of the universe itself, while it is thought that the
short wavelength limit is in the vicinity of the Planck length, although in principle the spectrum is
infinite and continuous.

The electromagnetic spectrum covers a wide range of wavelengths and photon energies. Light used
to "see" an object must have a wavelength about the same size as or smaller than the object. The
ALS generates light in the far ultraviolet and soft x-ray regions, which span the wavelengths suited to
studying molecules and atoms.
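As a rough numerical sketch (not from the original text), the relations f = c/λ and E = hf place representative wavelengths on the spectrum; the band labels and wavelengths below are illustrative assumptions.

# Sketch: frequency f = c/wavelength and photon energy E = h*f for a few
# representative wavelengths (band labels and values are illustrative assumptions).
C = 299_792_458.0        # speed of light in vacuum, m/s
H = 6.626_070_15e-34     # Planck constant, J*s
EV = 1.602_176_634e-19   # joules per electronvolt

examples = {             # assumed wavelengths, in meters
    "AM radio (~300 m)": 300.0,
    "green light (~550 nm)": 550e-9,
    "soft x-ray (~1 nm)": 1e-9,
    "gamma ray (~1 pm)": 1e-12,
}

for name, wavelength in examples.items():
    f = C / wavelength            # frequency, Hz
    energy_ev = H * f / EV        # photon energy, eV
    print(f"{name:22s} f = {f:.3e} Hz, E = {energy_ev:.3e} eV")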

ELECTROMAGNETIC WAVE EQUATION

The electromagnetic wave equation is a second-order partial differential equation that
describes the propagation of electromagnetic waves through a medium or in a vacuum. The
homogeneous form of the equation, written in terms of either the electric field E or the magnetic
field B, takes the form:

    (∇² − (1/c²) ∂²/∂t²) E = 0
    (∇² − (1/c²) ∂²/∂t²) B = 0

where c is the speed of light in the medium and ∇² is the Laplace operator. In a vacuum,
c = c₀ = 299,792,458 meters per second, which is the speed of light in free space.[1] The
electromagnetic wave equation derives from Maxwell's equations. It should also be noted that in
most older literature, B is called the magnetic flux density or magnetic induction.
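A minimal numeric check (an illustration, assuming SI values for the vacuum constants) that the propagation speed appearing in the vacuum wave equation is c₀ = 1/√(μ₀ε₀):

# Sketch: the vacuum speed in the wave equation, c0 = 1/sqrt(mu0*eps0),
# recovered numerically from the vacuum permeability and permittivity.
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m (classical defined value)
EPS0 = 8.854_187_8128e-12   # vacuum permittivity, F/m

c0 = 1.0 / math.sqrt(MU0 * EPS0)
print(f"c0 = {c0:,.0f} m/s")   # ~299,792,458 m/s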

THE ORIGIN OF THE ELECTROMAGNETIC WAVE EQUATION

Conservation of charge
Conservation of charge requires that the time rate of change of the total charge enclosed within a
volume V must equal the net current flowing into the surface enclosing the volume:

    d/dt ∫V ρ dV = − ∮S J · dA

where J is the current density (in amperes per square meter) flowing through the surface and ρ is the
charge density (in coulombs per cubic meter) at each point in the volume.

From the divergence theorem, this relationship can be converted from integral form to differential
form:

    ∂ρ/∂t + ∇ · J = 0
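A minimal symbolic sketch of the differential continuity equation, ∂ρ/∂t + ∇·J = 0, using an assumed one-dimensional charge/current pair chosen so that the relation holds:

# Sketch: verify d(rho)/dt + dJx/dx = 0 for an assumed charge/current pair.
import sympy as sp

x, t = sp.symbols('x t')

rho = sp.exp(-t) * sp.sin(x)          # assumed charge density (1-D example)
Jx = -sp.exp(-t) * sp.cos(x)          # current density along x chosen to match it

# continuity: the combination below should vanish identically
print(sp.simplify(sp.diff(rho, t) + sp.diff(Jx, x)))   # -> 0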

Ampère's circuital law prior to Maxwell's correction
In its original form, Ampère's circuital law relates the magnetic field B to the current density J:

    ∮C B · dl = μ₀ ∫S J · dA

where S is an open surface terminated by the curve C. This integral form can be converted to
differential form using Stokes' theorem:

    ∇ × B = μ₀ J
Inconsistency between Ampère's circuital law and the law of conservation of charge
Taking the divergence of both sides of Ampère's circuital law gives:

    ∇ · (∇ × B) = μ₀ ∇ · J

The divergence of the curl of any vector field, including the magnetic field B, is always equal to zero:

    ∇ · (∇ × B) = 0

Combining these two equations implies that

    μ₀ ∇ · J = 0

Because μ₀ is a nonzero constant, it follows that

    ∇ · J = 0

However, the law of conservation of charge tells us that

    ∇ · J = −∂ρ/∂t
Hence, as in the case of Kirchhoff's circuit laws, Ampère's circuital law would appear only to hold
in situations involving constant charge density. This would rule out the situation that occurs in the
plates of a charging or a discharging capacitor.

Maxwell's correction to Ampère's circuital law
Maxwell conceived of displacement current in connection with linear polarization of a dielectric


medium. The concept has since been extended to apply to the vacuum. The justification of this
virtual extension of displacement current is as follows:

Gauss's law in integral form states:

    ∮S E · dA = (1/ε₀) ∫V ρ dV

where S is a closed surface enclosing the volume V. This integral form can be converted to
differential form using the divergence theorem:

    ∇ · E = ρ/ε₀

Taking the time derivative of both sides and reversing the order of differentiation on the left-hand
side gives:

    ∇ · ∂E/∂t = (1/ε₀) ∂ρ/∂t

This last result, along with Ampère's circuital law and the conservation of charge equation,
suggests that there are actually two origins of the magnetic field: the current density J, as Ampère
had already established, and the so-called displacement current:

    J_D = ε₀ ∂E/∂t

So the corrected form of Ampère's circuital law becomes:

    ∇ × B = μ₀ (J + ε₀ ∂E/∂t)
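As an illustration of the displacement current term (an assumed parallel-plate capacitor scenario, not an example from the text), the conduction current I charging the plates equals ε₀ times the rate of change of electric flux in the gap:

# Sketch: in a charging parallel-plate capacitor (assumed example), the
# displacement current eps0 * A * dE/dt between the plates equals the
# conduction current I feeding the plates.
EPS0 = 8.854_187_8128e-12   # vacuum permittivity, F/m

I = 1.0          # conduction current charging the plates, A (assumed)
A = 0.01         # plate area, m^2 (assumed)

dE_dt = I / (EPS0 * A)              # rate of change of the field in the gap
I_displacement = EPS0 * A * dE_dt   # displacement current through the gap
print(f"dE/dt = {dE_dt:.3e} V/(m*s), displacement current = {I_displacement:.3f} A")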

Maxwell – first to propose that light is an electromagnetic wave
A postcard from Maxwell to Peter Tait.

In his 1864 paper entitled A Dynamical Theory of the Electromagnetic Field, Maxwell utilized
the correction to Ampère's circuital law that he had made in part III of his 1861 paper On Physical
Lines of Force. In PART VI of his 1864 paper which is entitled 'ELECTROMAGNETIC THEORY OF
LIGHT'[], Maxwell combined displacement current with some of the other equations of
electromagnetism and he obtained a wave equation with a speed equal to the speed of light. He
commented:

The agreement of the results seems to show that light and magnetism are affections of the same
substance, and that light is an electromagnetic disturbance propagated through the field according
to electromagnetic laws.

Maxwell's derivation of the electromagnetic wave equation has been replaced in modern
physics by a much less cumbersome method involving combining the corrected version of Ampère's
circuital law with Faraday's law of induction.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we
begin with the modern 'Heaviside' form of Maxwell's equations. In a vacuum and charge-free space,
these equations are:

    ∇ · E = 0
    ∇ × E = −∂B/∂t
    ∇ · B = 0
    ∇ × B = (1/c²) ∂E/∂t

Taking the curl of the curl equations gives:

    ∇ × (∇ × E) = −∂(∇ × B)/∂t = −(1/c²) ∂²E/∂t²
    ∇ × (∇ × B) = (1/c²) ∂(∇ × E)/∂t = −(1/c²) ∂²B/∂t²

By using the vector identity

    ∇ × (∇ × V) = ∇(∇ · V) − ∇²V

where V is any vector function of space, these turn into the wave equations:

    ∇²E − (1/c²) ∂²E/∂t² = 0
    ∇²B − (1/c²) ∂²B/∂t² = 0

where c = c₀ = 299,792,458 m/s is the speed of light in free space.
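A short symbolic sketch (using sympy, with an assumed one-dimensional plane wave) showing that E = E₀ cos(ωt − kz) satisfies the wave equation exactly when the dispersion relation ω = ck holds:

# Sketch: check that E(z, t) = E0*cos(w*t - k*z) satisfies
# d2E/dz2 - (1/c^2) d2E/dt2 = 0 when w = c*k.
import sympy as sp

z, t = sp.symbols('z t')
E0, c, k = sp.symbols('E0 c k', positive=True)
w = c * k                                  # impose the dispersion relation
E = E0 * sp.cos(w * t - k * z)

residual = sp.diff(E, z, 2) - sp.diff(E, t, 2) / c**2
print(sp.simplify(residual))               # -> 0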

Covariant form of the homogeneous wave equation

Time dilation in transversal motion. The requirement that the speed of light is constant in every
inertial reference frame leads to the theory of special relativity.
These relativistic equations can be written in contravariant form as

    □ A^μ = 0

where the electromagnetic four-potential is

    A^μ = (φ/c, A)

with the Lorenz gauge condition:

    ∂_μ A^μ = 0

where

    □ = ∇² − (1/c²) ∂²/∂t²

is the d'Alembertian operator. (The square box is not a typographical error; it is the correct symbol
for this operator.)

Homogeneous wave equation in curved spacetime

In curved spacetime, the electromagnetic wave equation is modified in two ways: the derivative is
replaced with the covariant derivative, and a new term that depends on the curvature appears,

    −A^{α;β}_{;β} + R^α_β A^β = 0

where R^α_β is the Ricci curvature tensor and the semicolon indicates covariant differentiation.

The generalization of the Lorenz gauge condition in curved spacetime is assumed:

    A^μ_{;μ} = 0

Inhomogeneous electromagnetic wave equation

Localized time-varying charge and current densities can act as sources of electromagnetic waves
in a vacuum. Maxwell's equations can be written in the form of a wave equation with sources. The
addition of sources to the wave equations makes the partial differential equations inhomogeneous.


 

SOLUTIONS TO THE HOMOGENEOUS ELECTROMAGNETIC WAVE EQUATION

The general solution to the electromagnetic wave equation is a linear superposition of waves of the
form

    E(r, t) = g(φ(r, t)) = g(ωt − k · r)

and

    B(r, t) = g(φ(r, t)) = g(ωt − k · r)

for virtually any well-behaved function g of the dimensionless argument φ, where

    ω is the angular frequency (in radians per second), and

    k is the wave vector (in radians per meter).

Although the function g can be and often is a monochromatic sine wave, it does not have to
be sinusoidal, or even periodic. In practice, g cannot have infinite periodicity because any real
electromagnetic wave must always have a finite extent in time and space. As a result, and based on
the theory of Fourier decomposition, a real wave must consist of the superposition of an infinite set
of sinusoidal frequencies.

In addition, for a valid solution the wave vector and the angular frequency are not independent;
they must adhere to the dispersion relation:

    k = |k| = ω/c = 2π/λ

where k is the wavenumber and λ is the wavelength.
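A small numeric sketch of the dispersion relation for an assumed wavelength of 550 nm:

# Sketch: dispersion relation k = w/c = 2*pi/lambda for an assumed wavelength.
import math

C = 299_792_458.0          # speed of light in vacuum, m/s
wavelength = 550e-9        # assumed wavelength, m

k = 2 * math.pi / wavelength       # wavenumber, rad/m
w = C * k                          # angular frequency, rad/s
print(f"k = {k:.3e} rad/m, w = {w:.3e} rad/s, w/(c*k) = {w / (C * k):.1f}")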

Monochromatic, sinusoidal steady-state

The simplest set of solutions to the wave equation results from assuming sinusoidal waveforms of a
single frequency in separable form:

    E(r, t) = Re{ E(r) e^(jωt) }

where

•  j is the imaginary unit,
•  ω = 2πf is the angular frequency (in radians per second),
•  f is the frequency (in hertz), and
•  e^(jωt) = cos(ωt) + j sin(ωt) is Euler's formula.

Plane wave solutions

See also: Sinusoidal plane-wave solutions of the electromagnetic wave equation

Consider a plane defined by a unit normal vector

    n = k/k

Then the planar traveling wave solutions of the wave equations are

    E(r, t) = E₀ cos(ωt − k · r + φ₀)

and

    B(r, t) = B₀ cos(ωt − k · r + φ₀)

where r is the position vector (in meters).

These solutions represent planar waves traveling in the direction of the normal vector n. If we
define the z direction as the direction of n and the x direction as the direction of E, then by
Faraday's law the magnetic field lies in the y direction and is related to the electric field by the
relation B = E/c. Because the divergence of the electric and magnetic fields is zero, there are
no fields in the direction of propagation.

This solution is the linearly polarized solution of the wave equations. There are also circularly
polarized solutions in which the fields rotate about the normal vector.
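A brief numeric sketch of these statements for an assumed plane wave propagating along +z with E along x: B = (n × E)/c lies along y, the three directions are mutually orthogonal, and |B| = |E|/c.

# Sketch: orthogonality and the relation B = E/c for an assumed plane wave.
import numpy as np

C = 299_792_458.0                      # speed of light, m/s
n = np.array([0.0, 0.0, 1.0])          # propagation direction (unit normal)
E0 = 100.0                             # assumed electric field amplitude, V/m

E = np.array([E0, 0.0, 0.0])           # E along x
B = np.cross(n, E) / C                 # B = (n x E)/c, lies along y

print("B =", B, "T")
print("E.B =", np.dot(E, B), " n.E =", np.dot(n, E), " n.B =", np.dot(n, B))
print("|B| == |E|/c:", np.isclose(np.linalg.norm(B), E0 / C))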

Spectral decomposition

Because of the linearity of Maxwell's equations in a vacuum, solutions can be decomposed into a
superposition of sinusoids. This is the basis for the Fourier transform method for the solution of
differential equations. The sinusoidal solution to the electromagnetic wave equation takes the form

    E(r, t) = E₀ cos(ωt − k · r + φ₀)
Electromagnetic spectrum illustration.

and

    B(r, t) = B₀ cos(ωt − k · r + φ₀)

where

    t is time (in seconds),
    ω is the angular frequency (in radians per second),
    k is the wave vector (in radians per meter), and
    φ₀ is the phase angle (in radians).

The wave vector is related to the angular frequency by

    k = |k| = ω/c = 2π/λ

where k is the wavenumber and λ is the wavelength.

The electromagnetic spectrum is a plot of the field magnitudes (or energies) as a function of
wavelength.

Other solutions

Spherically symmetric and cylindrically symmetric analytic solutions to the electromagnetic wave
equations are also possible.

In cylindrical coordinates the wave equation can be written as follows:

LONGITUDINAL WAVES

Longitudinal waves are waves that have the same direction of oscillation or vibration along their
direction of travel, which means that the oscillation of the medium (particle) is in the same or the
opposite direction as the motion of the wave. Mechanical longitudinal waves have also been
referred to as compressional waves or compression waves. Examples of longitudinal waves include
sound waves (alternations in pressure, particle displacement, or particle velocity propagated in an
elastic material) and seismic P-waves (created by earthquakes and explosions).

SOUND WAVES

In the case of longitudinal harmonic sound waves, the frequency and wavelength can be
described by the formula

    y(x, t) = y₀ cos( ω (t − x/c) )

where:
•  y is the displacement of the point on the traveling sound wave;
•  x is the distance the point has traveled from the wave's source;
•  t is the time elapsed;
•  y₀ is the amplitude of the oscillations;
•  c is the speed of the wave; and
•  ω is the angular frequency of the wave.

The quantity x/c is the time that the wave takes to travel the distance x.

The ordinary frequency of the wave, denoted f and measured in hertz, can be found using

    f = ω / 2π

For sound waves, the amplitude of the wave is the difference between the pressure of the
undisturbed air and the maximum pressure caused by the wave. Sound's propagation speed
depends on the type, temperature and pressure of the medium through which it propagates.
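A small sketch evaluating the displacement formula y(x, t) = y₀ cos(ω(t − x/c)) for assumed values (a 440 Hz tone in air):

# Sketch: harmonic sound-wave displacement for assumed parameter values.
import math

y0 = 1e-6                 # displacement amplitude, m (assumed)
c = 343.0                 # speed of sound in air, m/s (assumed, ~20 C)
f = 440.0                 # ordinary frequency, Hz (assumed)
w = 2 * math.pi * f       # angular frequency, rad/s

def displacement(x, t):
    """Displacement of the medium at distance x (m) and time t (s)."""
    return y0 * math.cos(w * (t - x / c))

print(f"travel time over 1 m: {1.0 / c:.4f} s")
print(f"y(x=1 m, t=0.01 s)  = {displacement(1.0, 0.01):.3e} m")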

PRESSURE WAVES

In an elastic medium with rigidity, a harmonic pressure wave oscillation has the form

    y(x, t) = y₀ cos(kx − ωt + φ)

where:
•  y₀ is the amplitude of displacement,
•  k is the wavenumber,
•  x is the distance along the axis of propagation,
•  ω is the angular frequency,
•  t is time, and
•  φ is the phase difference.

The force acting to return the medium to its original position is provided by the medium's bulk
modulus.

HISTORICAL THEORIES ABOUT LIGHT, IN CHRONOLOGICAL ORDER

Hindu and Buddhist theories

In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the 6th–5th
century BC, developed theories on light. According to the Samkhya school, light is one of the five
fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity
of these elements is not specifically mentioned and it appears that they were actually taken to be
continuous. On the other hand, the Vaisheshika school gives an atomic theory of the physical world
on the non-atomic ground of ether, space and time. (See Indian atomism.) The basic atoms are those
of earth (prthivi), water (apas), fire (tejas), and air (vayu), which should not be confused with the
ordinary meaning of these terms. These atoms are taken to form binary molecules that combine
further to form larger molecules. Motion is defined in terms of the movement of the physical atoms
and it appears that it is taken to be non-instantaneous. Light rays are taken to be a stream of high-
velocity tejas (fire) atoms. The particles of light can exhibit different characteristics depending on
the speed and the arrangements of the tejas atoms. Around the first century BC, the Vishnu Purana
refers to sunlight as "the seven rays of the sun".

Later, in 499 AD, Aryabhata, who proposed a heliocentric solar system of gravitation in his
Aryabhatiya, wrote that the planets and the Moon do not have their own light but reflect the light of
the Sun.

The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century,
developed a type of atomism that is a philosophy about reality being composed of atomic entities
that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent
to energy, similar to the modern concept of photons, though they also viewed all matter as being
composed of these light/energy particles. It is written in the Rigveda that light consists of three
primary colors. "Mixing the three colours, ye have produced all the objects of sight!"[5]

Greek and Hellenistic theories

In the fifth century BC, Empedocles postulated that everything was composed of four elements:
fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements
and that she lit the fire in the eye which shone out from the eye making sight possible. If this were
true, then one could see during the night just as well as during the day, so Empedocles postulated an
interaction between rays from the eyes and rays from a source such as the sun.

In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid
postulated that light travelled in straight lines and he described the laws of reflection and studied
them mathematically. He questioned that sight is the result of a beam from the eye, for he asks how
one sees the stars immediately, if one closes one's eyes, then opens them at night. Of course if the
beam from the eye travels infinitely fast this is not a problem.

In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote:

"The light and heat of the sun; these are composed of minute atoms which, when they are shoved
off, lose no time in shooting right across the interspace of air in the direction imparted by the
shove." – On the nature of the Universe

Despite being similar to later particle theories, Lucretius's views were not generally accepted
and light was still theorized as emanating from the eye.

Ptolemy (c. 2nd century) wrote about the refraction of light in his book Optics, and developed a
theory of vision whereby objects are seen by rays of light emanating from the eyes.[6]

Optical theory
Ibn al-Haytham proved that light travels in straight lines through optical experiments.

The Muslim scientist Ibn al-Haytham (965–1040), known as Alhacen or Alhazen in the West,
developed a broad theory of vision based on geometry and anatomy in his Book of Optics (1021). Ibn
al-Haytham provided the first correct description of how vision works,[7] explaining that it is not due
to objects being seen by rays of light emanating from the eyes, as Euclid and Ptolemy had assumed,
but due to light rays entering the eyes.[8] Ibn al-Haytham postulated that every point on an
illuminated surface radiates light rays in all directions, but that only one ray from each point can be
seen: the ray that strikes the eye perpendicularly. The other rays strike at different angles and are
not seen. He conducted experiments to support his argument, which included the development of
apparatus such as the pinhole camera and camera obscura, which produces an inverted image.[9]
Alhacen held light rays to be streams of minute particles that "lack all sensible qualities except
energy"[10] and travel at a finite speed.[11][12][13] He improved Ptolemy's theory of the refraction of
light, and went on to describe the laws of refraction, though this had been discovered earlier by Ibn
Sahl (c. 940–1000) several decades before him.[14][15]

A page of Ibn Sahl's manuscript showing his discovery of the law of refraction (Snell's law).

He also carried out the first experiments on the dispersion of light into its constituent colors.
His major work Kitab al-Manazir (Book of Optics) was translated into Latin in the Middle Ages, as
was his book dealing with the colors of sunset. He dealt at length with the theory of various physical
phenomena such as shadows, eclipses, and the rainbow. He also attempted to explain binocular vision,
and gave an explanation of the apparent increase in size of the sun and the moon when near the
horizon, known as the moon illusion. Because of his extensive experimental research on optics, Ibn al-
Haytham is considered the "father of modern optics".[16]

Ibn al-Haytham developed the camera obscura and pinhole camera for his experiments on light.
Ibn al-Haytham also correctly argued that we see objects because the sun's rays of light,
which he believed to be streams of tiny energy particles[10] travelling in straight lines, are reflected
from objects into our eyes.[11] He understood that light must travel at a large but finite
velocity,[11][12][13] and that refraction is caused by the velocity being different in different
substances.[11] He also studied spherical and parabolic mirrors, and understood how refraction by a
lens will allow images to be focused and magnification to take place. He understood mathematically
why a spherical mirror produces aberration.

Ibn al-Haytham's optical model of light was "the first comprehensive and systematic
alternative to Greek optical theories."[17] He initiated a revolution in optics and visual
perception,[18][19][20][21][22][23] also known as the 'Optical Revolution',[24] and laid the foundations for
physical optics.[25][26] As such, he is often regarded as the "father of modern optics."[25]

Avicenna (980–1037) agreed that the speed of light is finite, as he "observed that if the
perception of light is due to the emission of some sort of particles by a luminous source, the speed
of light must be finite."[27] Abū Rayhān al-Bīrūnī (973–1048) also agreed that light has a finite speed,
and he was the first to discover that the speed of light is much faster than the speed of sound.[28] In
the late 13th and early 14th centuries, Qutb al-Din al-Shirazi (1236–1311) and his student Kamāl al-
Dīn al-Fārisī (1260–1320) continued the work of Ibn al-Haytham, and they were the first to give the
correct explanations for the rainbow phenomenon.[29]

René Descartes (1596–1650) held that light was a mechanical property of the luminous
body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Bacon,
Grosseteste, and Kepler.[30] In 1637 he published a theory of the refraction of light that assumed,
incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes
arrived at this conclusion by analogy with the behaviour of sound waves.[citation needed] Although
Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like
a wave and in concluding that refraction could be explained by the speed of light in different media.

Descartes was not the first to use mechanical analogies, but because he clearly asserts that light
is only a mechanical property of the luminous body and the transmitting medium, Descartes' theory
of light is regarded as the start of modern physical optics.[31]

Particle theory

Ibn al-Haytham (Alhazen, 965–1040) proposed a particle theory of light in his Book of Optics
(1021). He held light rays to be streams of minute energy particles[10] that travel in straight lines at a
finite speed.[11][12][13] He states in his optics that "the smallest parts of light," as he calls them, "retain
only properties that can be treated by geometry and verified by experiment; they lack all sensible
qualities except energy."[10] Avicenna (980–1037) also proposed that "the perception of light is due
to the emission of some sort of particles by a luminous source".[27]

Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was
published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age, and
preferred his view to Descartes' theory of the plenum. He stated in his Hypothesis of Light of 1675
that light was composed of corpuscles (particles of matter) which were emitted in all directions from
a source. One of Newton's arguments against the wave nature of light was that waves were known
to bend around obstacles, while light travelled only in straight lines. He did, however, explain the
phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing
that a light particle could create a localised wave in the aether.

Newton's theory could be used to predict the reflection of light, but could only explain refraction
by incorrectly assuming that light accelerated upon entering a denser medium because the
gravitational pull was greater. Newton published the final version of his theory in his Opticks of
1704. His reputation helped the particle theory of light to hold sway during the 18th century. The
particle theory of light led Laplace to argue that a body could be so massive that light could not
escape from it. In other words, it would become what is now called a black hole. Laplace withdrew
his suggestion when the wave theory of light was firmly established. A translation of his essay
appears in The Large Scale Structure of Space-Time, by Stephen Hawking and George F. R. Ellis.

Wave theory
In the 1660s, Robert Hooke published a wave theory of light. Christiaan Huygens worked out his
own wave theory of light in 1678, and published it in his Treatise on Light in 1690. He proposed that
light was emitted in all directions as a series of waves in a medium called the luminiferous ether. As
waves are not affected by gravity, it was assumed that they slowed down upon entering a denser
medium.

Thomas Young's sketch of the two-slit experiment showing the diffraction of light. Young's
experiments supported the theory that light consists of waves.

The wave theory predicted that light waves could interfere with each other like sound waves
(as noted around 1800 by Thomas Young), and that light could be polarized, if it were a transverse
wave. Young showed by means of a diffraction experiment that light behaved as waves. He also
proposed that different colors were caused by different wavelengths of light, and explained color
vision in terms of three-colored receptors in the eye.

Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis
et colorum (1746) that diffraction could more easily be explained by a wave theory.

Later, Augustin-Jean Fresnel independently worked out his own wave theory of light, and
presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's
mathematical work to produce a convincing argument in favour of the wave theory, helping to
overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical
methods that polarization could be explained only by the wave theory of light and only if light was
entirely transverse, with no longitudinal vibration whatsoever.

The weakness of the wave theory was that light waves, like sound waves, would need a
medium for transmission. A hypothetical substance called the luminiferous aether was proposed,
but its existence was cast into strong doubt in the late nineteenth century by the Michelson-Morley
experiment.

Newton's corpuscular theory implied that light would travel faster in a denser medium, while the
wave theory of Huygens and others implied the opposite. At that time, the speed of light could not
be measured accurately enough to decide which theory was correct. The first to make a sufficiently
accurate measurement was Léon Foucault, in 1850.[32] His result supported the wave theory, and the
classical particle theory was finally abandoned.
SPECULAR REFLECTION


Specular reflection is the mirror-like reflection of light (or sometimes other kinds of wave) from
a surface, in which light from a single incoming direction (a ray) is reflected into a single outgoing
direction. Such behavior is described by the law of reflection, which states that the direction of
incoming light (the incident ray) and the direction of outgoing reflected light (the reflected ray)
make the same angle with respect to the surface normal; thus the angle of incidence equals the
angle of reflection (mathematically, θi = θr). A second defining characteristic of specular
reflection is that the incident, normal, and reflected directions are coplanar. This behavior was first
discovered through careful observation and measurement by Hero of Alexandria (c. 10–70 AD).

Diagram of specular reflection.

Reflections on still water are an example of specular reflection.

Specular reflection is distinct from diffuse reflection, where incoming light is reflected in a
broad range of directions. The most familiar example of the distinction between specular and diffuse
reflection would be glossy and matte paints. While both exhibit a combination of specular and
diffuse reflection, matte paints have a higher proportion of diffuse reflection and glossy paints have
a greater proportion of specular reflection. Very highly polished surfaces, such as high quality
mirrors, can exhibit almost complete specular reflection.

Even when a surface exhibits only specular reflection with no diffuse reflection, not all of the
light is necessarily reflected. Some of the light may be absorbed by the materials. Additionally,
depending on the type of material behind the surface, some of the light may be transmitted through
the surface. For most interfaces between materials, the fraction of the light that is reflected
increases with increasing angle of incidence θi. If the light is propagating in a material with a higher
index of refraction than the material whose surface it strikes, then total internal reflection may occur
(if the angle of incidence is greater than a certain critical angle). Specular reflection from a dielectric
such as water can affect polarization and at Brewster's angle reflected light is completely linearly
polarized parallel to the interface.

The law of reflection arises from diffraction of a plane wave (with small wavelength) on a flat
boundary: when the boundary size is much larger than the wavelength, the electrons of the
boundary are seen oscillating exactly in phase only from one direction, the specular direction. If a
mirror becomes very small (comparable to the wavelength), the law of reflection no longer holds
and the behaviour of light is more complicated.
Waves other than visible light can also exhibit specular reflection. This includes other
electromagnetic waves, as well as non-electromagnetic waves. Examples include acoustic mirrors,
which reflect sound, and atomic mirrors, which reflect neutral atoms. For the efficient reflection of
atoms from a solid-state mirror, very cold atoms and/or grazing incidence are used in order to
provide significant quantum reflection; ridged mirrors are used to enhance the specular reflection of
atoms.

The consideration of specular reflection in dentistry helps improve the aesthetic quality of
an inlay, onlay or filling, allowing the appearance of the material 'flowing' in with the natural
dentition. Specular reflection can be most accurately measured using a glossmeter. The
measurement is based on the refractive index of an object. The standard units for measurement are
"gloss units".

 
 

Direction of reflection

The direction of a reflected ray is determined by the vector of incidence and the surface normal
vector. Given an incident direction di from the surface to the light source and the surface normal
direction dn, the specularly reflected direction ds (all unit vectors) is:

    ds = 2 (dn · di) dn − di

where dn · di is a scalar obtained with the dot product. (Different authors may define the incident
and reflection directions with different signs than above.) Assuming these Euclidean vectors are
represented in column form, the equation can be equivalently expressed as a matrix-vector
multiplication:

    ds = R di

where R is the so-called Householder transformation matrix, defined as:

    R = 2 dn dnᵀ − I

Here ᵀ denotes transposition and I is the identity matrix. If only an unnormalized surface normal
vector n is available, the square root otherwise required to obtain its norm or length,
‖n‖ = √(n · n), can be avoided as follows:

    ds = 2 ((n · di)/(n · n)) n − di

or, in terms of the Householder form:

    ds = ( 2 (n nᵀ)/(nᵀ n) − I ) di

where n · di is again a scalar and n nᵀ is a matrix.
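A short numeric sketch of the reflection formula above, computed both directly and with the Householder matrix R = 2 dn dnᵀ − I; the incident direction and normal are assumed example vectors.

# Sketch: specular reflection of a ray direction about a surface normal,
# computed directly and via the Householder matrix (unit vectors, with d_i
# pointing from the surface toward the light source, as described above).
import numpy as np

d_n = np.array([0.0, 0.0, 1.0])                    # surface normal (unit), assumed
d_i = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)     # incident direction (unit), assumed

d_s_direct = 2.0 * np.dot(d_n, d_i) * d_n - d_i    # ds = 2(dn.di) dn - di

R = 2.0 * np.outer(d_n, d_n) - np.eye(3)           # Householder transformation
d_s_matrix = R @ d_i

print("direct :", d_s_direct)     # [-0.707, 0, 0.707]
print("matrix :", d_s_matrix)     # same result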

REFRACTION

Refraction is the bending of light rays when passing from one transparent material to another. It
is described by Snell's Law:

    n₁ sin θ₁ = n₂ sin θ₂

where θ₁ is the angle between the ray and the surface normal in the first medium, θ₂ is the angle
between the ray and the surface normal in the second medium, and n₁ and n₂ are the indices of
refraction; n = 1 in a vacuum and n > 1 in a transparent substance. When a beam of light crosses the
boundary between a vacuum and another medium, or between two different media, the wavelength
of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or
rather normal) to the boundary, the change in wavelength results in a change in the direction of the
beam. This change of direction is known as refraction. The refractive quality of lenses is frequently
used to manipulate light in order to change the apparent size of images. Magnifying glasses,
spectacles, contact lenses, microscopes and refracting telescopes are all examples of this
manipulation. Light refraction is the main basis of measurement for gloss. Gloss is measured using a
glossmeter.

Refraction of light at the interface between two media of different refractive indices, with n₂ > n₁.
Since the phase velocity is lower in the second medium (v₂ < v₁), the angle of refraction θ₂ is less
than the angle of incidence θ₁; that is, the ray in the higher-index medium is closer to the normal.

The straw appears to be broken, due to refraction of light as it emerges into the air.
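A minimal sketch of Snell's law as stated above, with assumed indices n₁ = 1.0 (air) and n₂ = 1.5 (glass), including the total-internal-reflection case:

# Sketch: Snell's law n1*sin(theta1) = n2*sin(theta2), with a check for
# total internal reflection, for assumed indices of refraction.
import math

def refraction_angle(theta1_deg, n1=1.0, n2=1.5):
    """Return the refraction angle in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:                 # only possible when n1 > n2
        return None
    return math.degrees(math.asin(s))

print(refraction_angle(30.0))                   # air -> glass: ~19.5 degrees
print(refraction_angle(30.0, n1=1.5, n2=1.0))   # glass -> air: ~48.6 degrees
print(refraction_angle(45.0, n1=1.5, n2=1.0))   # beyond the critical angle: None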
ELECTROMAGNETIC THEORY

In 1845, Michael Faraday discovered that the plane of polarization of linearly polarized light is
rotated when the light rays travel along the magnetic field direction in the presence of a transparent
dielectric, an effect now known as Faraday rotation.[33] This was the first evidence that light was
related to electromagnetism. In 1846 he speculated that light might be some form of disturbance
propagating along magnetic field lines.[34] Faraday proposed in 1847 that light was a high-frequency
electromagnetic vibration, which could propagate even in the absence of a medium such as the
ether.

Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light.
Maxwell discovered that self-propagating electromagnetic waves would travel through space at a
constant speed, which happened to be equal to the previously measured speed of light. From this,
Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in
1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism,
which contained a full mathematical description of the behaviour of electric and magnetic fields, still
known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory
experimentally by generating and detecting radio waves in the laboratory, and demonstrating that
these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction,
diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the
development of modern radio, radar, television, electromagnetic imaging, and wireless
communications.

THE SPECIAL THEORY OF RELATIVITY

The wave theory was wildly successful in explaining nearly all optical and electromagnetic
phenomena, and was a great triumph of nineteenth century physics. By the late nineteenth century,
however, a handful of experimental anomalies remained that could not be explained by or were in
direct conflict with the wave theory. One of these anomalies involved a controversy over the speed
of light. The constant speed of light predicted by Maxwell's equations and confirmed by the
Michelson-Morley experiment contradicted the mechanical laws of motion that had been
unchallenged since the time of Galileo, which stated that all speeds were relative to the speed of the
observer. In 1905, Albert Einstein resolved this paradox by revising the Galilean model of space and
time to account for the constancy of the speed of light. Einstein formulated his ideas in his special
theory of relativity, which advanced humankind's understanding of space and time. Einstein also
demonstrated a previously unknown fundamental equivalence between energy and mass with his
famous equation

    E = mc²

where E is energy, m is, depending on the context, the rest mass or the relativistic mass, and c is the
speed of light in a vacuum.
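As a quick numeric illustration of E = mc² (the one-gram mass is an assumed example):

# Sketch: rest energy E = m*c^2 for an assumed 1 gram of mass.
C = 299_792_458.0       # speed of light in vacuum, m/s
m = 1e-3                # mass, kg (assumed: 1 gram)

E = m * C**2
print(f"E = {E:.3e} J")   # ~9.0e13 J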

PARTICLE THEORY REVISITED

Another experimental anomaly was the photoelectric effect, by which light striking a metal
surface ejected electrons from the surface, causing an electric current to flow across an applied
voltage. Experimental measurements demonstrated that the energy of individual ejected electrons
was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a
certain minimum frequency, which depended on the particular metal, no current would flow
regardless of the intensity. These observations appeared to contradict the wave theory, and for
years physicists tried in vain to find an explanation. In 1905, Einstein solved this puzzle as well, this
time by resurrecting the particle theory of light to explain the observed effect. Because of the
preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially
with great skepticism among established physicists. Eventually Einstein's explanation of the
photoelectric effect would triumph, and it ultimately formed the basis for wave–particle duality and
much of quantum mechanics.

QUANTUM THEORY

A third anomaly that arose in the late 19th century involved a contradiction between the wave
theory of light and measurements of the electromagnetic spectrum emitted by thermal radiators, or
so-called black bodies. Physicists struggled with this problem, which later became known as the
ultraviolet catastrophe, unsuccessfully for many years. In 1900, Max Planck developed a new theory
of black-body radiation that explained the observed spectrum correctly. Planck's theory was based
on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete
bundles or packets of energy. These packets were called quanta, and the particle of light was given
the name photon, to correspond with other particles being described around this time, such as the
electron and proton. A photon has an energy, E, proportional to its frequency, f, by

    E = hf = hc/λ

where h is Planck's constant, λ is the wavelength and c is the speed of light. Likewise, the
momentum p of a photon is also proportional to its frequency and inversely proportional to its
wavelength:

    p = E/c = hf/c = h/λ

As it originally stood, this theory did not explain the simultaneous wave-like and particle-like natures
of light, though Planck would later work on theories that did. In 1918, Planck received the Nobel
Prize in Physics for his part in the founding of quantum theory.
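A small numeric sketch of the photon relations E = hf = hc/λ and p = h/λ for an assumed 550 nm (green) photon:

# Sketch: photon energy and momentum for an assumed 550 nm wavelength.
H = 6.626_070_15e-34     # Planck constant, J*s
C = 299_792_458.0        # speed of light, m/s
EV = 1.602_176_634e-19   # joules per electronvolt
wavelength = 550e-9      # assumed wavelength, m

f = C / wavelength                 # frequency, Hz
E = H * f                          # energy, J
p = H / wavelength                 # momentum, kg*m/s
print(f"f = {f:.3e} Hz, E = {E / EV:.2f} eV, p = {p:.3e} kg*m/s")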

WAVE–PARTICLE DUALITY

The modern theory that explains the nature of light includes the notion of wave–particle duality,
described by Albert Einstein in the early 1900s, based on his study of the photoelectric effect and
Planck's results. Einstein asserted that the energy of a photon is proportional to its frequency. More
generally, the theory states that everything has both a particle nature and a wave nature, and
various experiments can be done to bring out one or the other. The particle nature is more easily
discerned if an object has a large mass, and it was not until a bold proposition by Louis de Broglie in
1924 that the scientific community realized that electrons also exhibited wave–particle duality. The
wave nature of electrons was experimentally demonstrated by Davisson and Germer in 1927.
Einstein received the Nobel Prize in 1921 for his work on the wave–particle duality of photons
(especially explaining the photoelectric effect thereby), and de Broglie followed in 1929 with his
extension to other particles.

QUANTUM ELECTRODYNAMICS

The quantum mechanical theory of light and electromagnetic radiation continued to evolve
through the 1920s and 1930s, and culminated with the development during the 1940s of the theory
of quantum electrodynamics, or QED. This so-called quantum field theory is among the most
comprehensive and experimentally successful theories ever formulated to explain a set of natural
phenomena. QED was developed primarily by the physicists Richard Feynman, Freeman Dyson, Julian
Schwinger, and Shin-Ichiro Tomonaga. Feynman, Schwinger, and Tomonaga shared the 1965 Nobel
Prize in Physics for their contributions.
LIGHT PRESSURE

Light pushes on objects in its path, just as the wind would do. This pressure is most easily
explained by particle theory: photons hit an object and transfer their momentum. Light pressure can
cause asteroids to spin faster,[35] acting on their irregular shapes as on the vanes of a windmill.
The possibility of making solar sails that would accelerate spaceships in space is also under
investigation. Although the motion of the Crookes radiometer was originally attributed to light
pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a
partial vacuum.[38] This should not be confused with the Nichols radiometer, in which the motion
is directly caused by light pressure.

SPIRITUALITY

An intricate display for the feast of St. Thomas at Kallara Pazhayapalli in Kottayam, Kerala, India
dramatically illustrates the importance of light in religion.

The sensory perception of light plays a central role in spirituality (vision, enlightenment,
darshan, Tabor Light). The presence of light as opposed to its absence (darkness) is a common
metaphor of good and evil, knowledge and ignorance, and similar concepts. This idea is prevalent in
both Eastern and Western spirituality.
NUCLEAR PHYSICS

Nuclear physics is the field of physics that studies the building blocks and interactions of atomic
nuclei. The most commonly known applications of nuclear physics are nuclear power and nuclear
weapons, but the research has provided wider applications, including those in medicine (nuclear
medicine, magnetic resonance imaging), materials engineering (ion implantation) and archaeology
(radiocarbon dating).

The field of particle physics evolved out of nuclear physics and, for this reason, has been
included under the same term in earlier times.

SPONTANEOUS CHANGES FROM ONE NUCLIDE TO ANOTHER: NUCLEAR DECAY

There are 80 elements which have at least one stable isotope (defined as isotopes never
observed to decay), and in total there are about 256 such stable isotopes. However, there are
thousands more well-characterized isotopes which are unstable. These radioisotopes may decay on
timescales ranging from fractions of a second to weeks, years, or many billions of years.

For example, if a nucleus has too few or too many neutrons it may be unstable, and will decay
after some period of time. In a process called beta decay, a nitrogen-16 atom (7 protons, 9
neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being
created. In this decay a neutron in the nitrogen nucleus is turned into a proton, an electron and an
antineutrino by the weak nuclear force. The element is transmuted to another element in the
process, because while it previously had seven protons (which makes it nitrogen) it now has eight
(which makes it oxygen).

In alpha decay, the radioactive element decays by emitting a helium nucleus (2 protons and 2
neutrons), giving another element, plus helium-4. In many cases this process continues through
several steps of this kind, including other types of decay, until a stable element is formed.

In gamma decay, a nucleus decays from an excited state into a lower state by emitting a gamma
ray. It is then stable. The element is not changed in the process.

Other more exotic decays are possible (see the main article). For example, in internal conversion
decay, the energy from an excited nucleus may be used to eject one of the inner orbital electrons
from the atom, in a process which produces high speed electrons, but is not beta decay, and (unlike
beta decay) does not transmute one element to another.

NUCLEAR FUSION

When two low-mass nuclei come into very close contact with each other, it is possible for the
strong force to fuse the two together. It takes a great deal of energy to push the nuclei close enough
together for the strong or nuclear forces to have an effect, so the process of nuclear fusion can only
take place at very high temperatures or high densities. Once the nuclei are close enough together,
the strong force overcomes their electromagnetic repulsion and squishes them into a new nucleus. A
very large amount of energy is released when light nuclei fuse together because the binding energy
per nucleon increases with mass number up until nickel-62. Stars like our sun are powered by the
fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled
fusion of hydrogen into helium is known as thermonuclear runaway. Research to find an
economically viable method of using energy from a controlled fusion reaction is currently being
undertaken by various research establishments (see JET and ITER).

NUCLEAR FISSION

For nuclei heavier than nickel-62, the binding energy per nucleon decreases with the mass
number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two
lighter ones. This splitting of atoms is known as nuclear fission.

The process of alpha decay may be thought of as a special type of spontaneous nuclear fission.
This process produces a highly asymmetrical fission because the four particles which make up the
alpha particle are especially tightly bound to each other, making production of this nucleus in fission
particularly likely.

For certain of the heaviest nuclei which produce neutrons on fission, and which also easily
absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in
a so-called chain reaction. (Chain reactions were known in chemistry before physics, and in fact
many familiar processes like fires and chemical explosions are chemical chain reactions.) The fission
or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear
power plants and fission type nuclear bombs such as the two that the United States used against
Hiroshima and Nagasaki at the end of World War II. Heavy nuclei such as uranium and thorium may
undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.

For a neutron-initiated chain-reaction to occur, there must be a critical mass of the element
present in a certain space under certain conditions (these conditions slow and conserve neutrons for
the reactions). There is one known example of a natural nuclear fission reactor, which was active in
two regions of Oklo, Gabon, Africa, over 1.5 billion years ago. Measurements of natural neutrino
emission have demonstrated that around half of the heat emanating from the Earth's core results
from radioactive decay. However, it is not known if any of this results from fission chain-reactions.

RADIOACTIVE DECAY

Alpha decay

Alpha decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle,
and thereby transforms (or 'decays') into an atom with a mass number 4 less and atomic number 2
less. For example:

    ²³⁸₉₂U → ²³⁴₉₀Th + ⁴₂He²⁺ [1]

although this is typically written as:

    ²³⁸U → ²³⁴Th + α
Alpha decay

An alpha particle is the same as a helium-4 nucleus, and both mass number and atomic
number are the same.

Alpha decay is by far the most common form of cluster decay where the parent atom ejects
a defined daughter collection of nucleons, leaving another defined product behind (in nuclear
fission, a number of different pairs of daughters of approximately equal size are formed). Alpha
decay is the most likely cluster decay because of the combined extremely high binding energy and
relatively small mass of the helium-4 product nucleus (the alpha particle).

Alpha decay, like other cluster decays, is fundamentally a quantum tunneling process. Unlike
beta decay, alpha decay is governed by the interplay between the nuclear force and the
electromagnetic force.

Alpha decay is a mode of radioactive decay seen only in heavier nuclides, with the lightest
known alpha emitters being the lightest isotopes (mass numbers 106–110) of tellurium (element 52).

Alpha particles have a typical kinetic energy of 5 MeV (that is, ≈ 0.13% of their total energy,
i.e. 110 TJ/kg) and a speed of about 15,000 km/s. This corresponds to a speed of around 0.05 c. There is
surprisingly little variation around this energy, due to the heavy dependence of the half-life of this
process on the energy produced (see the equations in the Geiger–Nuttall law).

Because of their relatively large mass, +2 electric charge and relatively low velocity, alpha
particles are very likely to interact with other atoms and lose their energy, so their forward motion is
effectively stopped within a few centimeters of air.
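A short sketch recovering the quoted figures from relativistic kinematics, assuming a 5 MeV kinetic energy and the alpha-particle rest energy of about 3727 MeV:

# Sketch: speed of a 5 MeV alpha particle from relativistic kinematics;
# the result is ~0.05 c, i.e. roughly 15,000 km/s, as quoted above.
import math

KE_MEV = 5.0                 # typical alpha kinetic energy, MeV
REST_MEV = 3727.379          # alpha particle rest energy, MeV
C = 299_792_458.0            # speed of light, m/s

gamma = 1.0 + KE_MEV / REST_MEV
beta = math.sqrt(1.0 - 1.0 / gamma**2)
print(f"beta = {beta:.4f}  (v = {beta * C / 1e3:,.0f} km/s)")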

Alpha source beneath a radiation detector


Most of the helium produced on Earth (approximately 99% of it) is the result of the alpha
decay of underground deposits of minerals containing uranium or thorium. The helium is brought to
the surface as a byproduct of natural gas production.

BETA DECAY
In nuclear physics, beta decay is a type of radioactive decay in which a beta particle (an electron
or a positron) is emitted. In the case of electron emission it is referred to as beta minus (β−), while in
the case of positron emission it is referred to as beta plus (β+). The kinetic energy of beta particles
has a continuous spectrum ranging from 0 to the maximal available energy (Q), which depends on
the parent and daughter nuclear states participating in the decay. A typical Q is around 1 MeV, but it
can range from a few keV to a few tens of MeV. Since the rest mass energy of the electron is
511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of
light.

In β− decay, the weak interaction converts a neutron (n) into a proton (p) while emitting an
electron (e−) and an electron antineutrino (ν̄e):

    n → p + e− + ν̄e

At the fundamental level (as depicted in the Feynman diagram below), this is due to the
conversion of a down quark into an up quark by emission of a W− boson; the W− boson subsequently
decays into an electron and an electron antineutrino.

β− decay in an atomic nucleus. The intermediate emission of a virtual W− boson is omitted.

The Feynman diagram for β− decay of a neutron into a proton, electron, and electron antineutrino
via an intermediate W− boson.

β− decay generally occurs in a neutron-rich nucleus.

In β+ decay, energy is used to convert a proton into a neutron, a positron (e+) and a neutrino (νe):

    energy + p → n + e+ + νe

So, unlike β−, β+ decay cannot occur in isolation, because it requires energy, the mass of the
neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the
value of the binding energy of the mother nucleus is less than that of the daughter nucleus. The
difference between these energies goes into the reaction of converting a proton into a neutron, a
positron and a neutrino, and into the kinetic energy of these particles.
Electron capture (K-capture)

In all the cases where β+ decay is allowed energetically (and the proton is part of a nucleus
with electron shells), it is accompanied by the electron capture process, in which an atomic electron
is captured by the nucleus with the emission of a neutrino:

    energy + p + e− → n + νe

But if the energy difference between the initial and final states is less than 2mec², then β+ decay
is not energetically possible, and electron capture is the sole decay mode.

This decay is also called K-capture, because the innermost electron of an atom belongs to
the K-shell of the electronic configuration of the atom, and this shell has the highest probability of
interacting with the nucleus.

Nuclear transmutation

If the proton and neutron are part of an atomic nucleus, these decay processes transmute one
chemical element into another. For example:

    ¹³⁷₅₅Cs → ¹³⁷₅₆Ba + e− + ν̄e   (beta minus decay)

    ²²₁₁Na → ²²₁₀Ne + e+ + νe   (beta plus decay)

    ²²₁₁Na + e− → ²²₁₀Ne + νe   (electron capture)

Beta decay does not change the number of nucleons, A, in the nucleus but changes only its
charge, Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may
turn into each other via beta decay. Among them, several nuclides (at least one) are beta stable,
because they present local minima of the mass excess: if such a nucleus has (A, Z) numbers, the
neighbouring nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but
not vice versa. For all odd mass numbers A the global minimum is also the unique local minimum.
For even A, there are up to three different beta-stable isobars experimentally known; for example,
⁹⁶₄₀Zr, ⁹⁶₄₂Mo, and ⁹⁶₄₄Ru are all beta-stable, though the first one can undergo a very rare
double beta decay (see below). There are about 355 known beta-decay stable nuclides in total.

A beta-stable nucleus may undergo other kinds of radioactive decay (alpha decay, for
example). In nature, most isotopes are beta stable, but a few exceptions exist with half-lives so long
that they have not had enough time to decay since the moment of their nucleosynthesis. One
example is ⁴⁰₁₉K, which undergoes all three types of beta decay (β−, β+ and electron capture) with a
half-life of 1.277×10⁹ years.

GAMMA RAYS
Artist's impression of an emission of a gamma ray (γ) from an atomic nucleus

Gamma rays (denoted as γ) are electromagnetic radiation of high frequency (very short
wavelength). They are produced by sub-atomic particle interactions such as electron-positron
annihilation, neutral pion decay, radioactive decay, fusion, fission or inverse Compton scattering in
astrophysical processes. Gamma rays typically have frequencies above 10¹⁹ Hz, and therefore have
energies above 100 keV and wavelengths less than 10 picometers, often smaller than an atom.
Photons from radioactive gamma decay commonly have energies of a few hundred keV, and are
almost always less than 10 MeV in energy.

Because they are a form of ionizing radiation, gamma rays can cause serious damage when
absorbed by living tissue, and are therefore a health hazard.

Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900, while studying
radiation emitted from radium. Alpha and beta "rays" had already been separated and named by the
work of Ernest Rutherford in 1899, and in 1903 Rutherford named Villard's distinct new radiation
"gamma rays."

In the past, the distinction between X-rays and gamma rays was based on energy (or
equivalently frequency or wavelength), with gamma rays being considered a higher-energy version
of X-rays. However, modern high-energy (megavoltage) X-rays produced by linear accelerators
("linacs") for megavoltage treatment in cancer radiotherapy usually have higher energy than gamma
rays produced by radioactive gamma decay. Conversely, one of the most common gamma-ray
emitting isotopes used in diagnostic nuclear medicine, technetium-99m, produces gamma radiation
of about the same energy (140 keV) as produced by a diagnostic X-ray machine, and significantly
lower energy than therapeutic photons from linacs. Because of this broad overlap in energy ranges,
the two types of electromagnetic radiation are now usually defined by their origin: X-rays are
emitted by electrons outside the nucleus, while gamma rays are emitted by the nucleus (that is,
produced by gamma decay), or from other particle decays or annihilation events. There is no lower
limit to the energy of photons produced by nuclear reactions, and thus ultraviolet and even lower
energy photons produced by these processes would also be defined as "gamma rays".[1]

In certain fields such as astronomy, gamma rays and X-rays are still sometimes defined by
energy, or used interchangeably, since the processes which produce them may be uncertain.

Units of measurement and exposure

The measure of gamma rays' ionizing ability is called the exposure:

The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and is the amount of
radiation required to create 1 coulomb of charge of each polarity in 1 kilogram of matter.

The röntgen (R) is an obsolete traditional unit of exposure, which represented the amount of
radiation required to create 1 esu of charge of each polarity in 1 cubic centimeter of dry air.
1 röntgen = 2.58×10⁻⁴ C/kg

However, the effect of gamma and other ionizing radiation on living tissue is more closely
related to the amount of energy deposited rather than the charge. This is called the absorbed dose:

The gray (Gy), which has units of (J/kg), is the SI unit of absorbed dose, and is the amount of
radiation required to deposit 1 joule of energy in 1 kilogram of any kind of matter.

The rad is the (obsolete) corresponding traditional unit, equal to 0.01 J deposited per kg. 100 rad = 1
Gy.

The equivalent dose is the measure of the biological effect of radiation on human tissue. For
gamma rays it is equal to the absorbed dose.

The sievert (Sv) is the SI unit of equivalent dose, which for gamma rays is numerically equal to the
gray (Gy).

The rem is the traditional unit of equivalent dose. For gamma rays it is equal to the rad or 0.01 J of
energy deposited per kg. 1 Sv = 100 rem.
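A minimal sketch of the unit conversions listed above (the example exposure and dose values are assumptions):

# Sketch: conversions between traditional and SI exposure/dose units.
R_TO_C_PER_KG = 2.58e-4      # 1 roentgen in C/kg
RAD_TO_GY = 0.01             # 1 rad = 0.01 Gy
REM_TO_SV = 0.01             # 1 rem = 0.01 Sv

exposure_r = 100.0           # assumed exposure, roentgen
dose_rad = 350.0             # assumed absorbed dose, rad
dose_rem = 350.0             # assumed equivalent dose (gamma), rem

print(f"{exposure_r} R   = {exposure_r * R_TO_C_PER_KG:.4f} C/kg")
print(f"{dose_rad} rad = {dose_rad * RAD_TO_GY:.2f} Gy")
print(f"{dose_rem} rem = {dose_rem * REM_TO_SV:.2f} Sv")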

Shielding

Shielding from gamma rays requires large amounts of mass. They are better absorbed by
materials with high atomic numbers and high density, although neither effect is important compared
to the total mass per area in the path of the gamma ray. For this reason, a lead shield is only
modestly better (20–30%) as a gamma shield than an equal mass of another shielding material such
as aluminium, concrete, or soil; lead's major advantage is its compactness.

The higher the energy of the gamma rays, the thicker the shielding required. Materials for
shielding gamma rays are typically characterized by the thickness required to reduce the intensity of
the gamma rays by one half (the half value layer or HVL). For example, gamma rays that require 1 cm
(0.4 in) of lead to reduce their intensity by 50% will also have their intensity reduced in half by 4.1 cm
of granite rock, 6 cm (2.5 in) of concrete, or 9 cm (3.5 in) of packed soil. However, the mass of this
much concrete or soil is only 20–30% larger than that of this amount of lead. Depleted uranium is
used for shielding in portable gamma ray sources, but again the savings in weight over lead are
modest, and the main effect is to reduce shielding bulk.

Matter interaction

The total absorption coefficient of aluminium (atomic number 13) for gamma rays, plotted versus
gamma energy, and the contributions by the three effects. Over most of the energy region shown,
the Compton effect dominates.

The total absorption coefficient of lead (atomic number 82) for gamma rays, plotted versus gamma
energy, and the contributions by the three effects. Here, the photoelectric effect dominates at low
energy. Above 5 MeV, pair production starts to dominate.

When a gamma ray passes through matter, the probability of absorption in a thin layer is
proportional to the thickness of that layer. This leads to an exponential decrease of intensity with
thickness:

    I(d) = I₀ · e^(−μd) = I₀ · e^(−nσd)

The exponential absorption holds only for a narrow beam of gamma rays. If a wide beam of gamma
rays passes through a thick slab of concrete, the scattering from the sides reduces the absorption.

Here μ = nσ is the absorption coefficient, measured in cm⁻¹, n is the number of atoms per cm³
in the material, σ is the absorption cross section in cm², and d is the thickness of material in cm.
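A small numeric sketch of the narrow-beam exponential attenuation law and its relation to the half value layer, HVL = ln 2 / μ, assuming the 1 cm lead HVL quoted in the shielding section:

# Sketch: narrow-beam attenuation I(d) = I0 * exp(-mu*d), with mu = ln(2)/HVL.
import math

hvl_cm = 1.0                    # assumed half value layer, cm
mu = math.log(2.0) / hvl_cm     # linear absorption coefficient, 1/cm

for d in (0.0, 1.0, 2.0, 5.0):  # thickness in cm
    print(f"d = {d:3.1f} cm -> I/I0 = {math.exp(-mu * d):.3f}")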

In passing through matter, gamma radiation ionizes via three main processes: the photoelectric
effect, Compton scattering, and pair production.

˜p Photoelectric effect: This describes the case in which a gamma photon interacts with and
transfers its energy to an atomic electron, ejecting that electron from the atom. The kinetic
energy of the resulting photoelectron is equal to the energy of the incident gamma photon
minus the binding energy of the electron. The photoelectric effect is the dominant energy
transfer mechanism for x-ray and gamma ray photons with energies below 50 keV (thousand
electron volts), but it is much less important at higher energies.
˜p Compton scattering: This is an interaction in which an incident gamma photon loses enough
energy to an atomic electron to cause its ejection, with the remainder of the original
photon's energy being emitted as a new, lower energy gamma photon with an emission
direction different from that of the incident gamma photon. The probability of Compton
scatter decreases with increasing photon energy. Compton scattering is thought to be the
principal absorption mechanism for gamma rays in the intermediate energy range 100 keV
to 10 MeV. Compton scattering is relatively independent of the atomic number of the
absorbing material, which is why very dense metals like lead are only modestly better
shields, on a per-weight basis, than are less dense materials.
˜p Pair production: This becomes possible with gamma energies exceeding 1.02 MeV, and becomes important as an absorption mechanism at energies over about 5 MeV (see illustration at right, for lead). By interaction with the electric field of a nucleus, the energy of the incident photon is converted into the mass of an electron-positron pair. Any gamma energy in excess of the equivalent rest mass of the two particles (1.02 MeV) appears as the
kinetic energy of the pair and the recoil nucleus. At the end of the positron's range, it
combines with a free electron. The entire mass of these two particles is then converted into
two gamma photons of at least 0.51 MeV energy each (or higher according to the kinetic
energy of the annihilated particles).

The secondary electrons (and/or positrons) produced in any of these three processes frequently
have enough energy to produce much ionization themselves.
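For the Compton process described above, the energy of the scattered photon follows the standard Compton relation E' = E / (1 + (E/mec²)(1 − cos θ)), which is textbook physics rather than something stated in this section. A short Python sketch of how the incident energy is shared between the scattered photon and the ejected electron:

    import math

    ELECTRON_REST_ENERGY_MEV = 0.511   # m_e * c^2

    def compton_scattered_energy(e_mev: float, angle_deg: float) -> float:
        """Energy of a gamma photon after Compton scattering through angle_deg."""
        return e_mev / (1.0 + (e_mev / ELECTRON_REST_ENERGY_MEV)
                        * (1.0 - math.cos(math.radians(angle_deg))))

    incident = 1.0                                         # MeV
    scattered = compton_scattered_energy(incident, 90.0)   # ~0.338 MeV
    electron_ke = incident - scattered                     # ~0.662 MeV goes to the electron
    print(scattered, electron_ke)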

mp Gamma ray production

Gamma rays are often produced alongside other forms of radiation such as alpha or beta. When a
nucleus emits an α or β particle, the daughter nucleus is sometimes left in an excited state. It can
then jump down to a lower energy state by emitting a gamma ray, in much the same way that an
atomic electron can jump to a lower energy state by emitting infrared, visible, or ultraviolet light.
Decay scheme of 60Co

Gamma rays, x-rays, visible light, and radio waves are all forms of electromagnetic radiation.
The only difference is the frequency and hence the energy of the photons. Gamma rays are the most
energetic. An example of gamma ray production follows.

First 60Co decays to excited 60Ni by beta decay. Then the 60Ni drops down to the ground state
(see nuclear shell model) by emitting two gamma rays in succession (1.17 MeV then 1.33 MeV):

60Co → 60Ni* + e− + ν̄e + γ + 1.17 MeV

60Ni* → 60Ni + γ + 1.33 MeV

Another example is the alpha decay of 241Am to form 237Np; this alpha decay is accompanied
by gamma emission. In some cases, the gamma emission spectrum for a nucleus (daughter nucleus)
is quite simple (e.g. 60Co/60Ni), while in other cases, such as with 241Am/237Np and 192Ir/192Pt, the
gamma emission spectrum is complex, revealing that a series of nuclear energy levels can exist. The
fact that an alpha spectrum can have a series of different peaks with different energies reinforces
the idea that several nuclear energy levels are possible.

Image of entire sky in 100 MeV or greater gamma rays as seen by the EGRET instrument aboard the
CGRO spacecraft. Bright spots within the galactic plane are pulsars while those above and below the
plane are thought to be quasars.
Because a beta decay is accompanied by the emission of a neutrino which also carries
energy away, the beta spectrum does not have sharp lines, but instead is a broad peak. Hence from
beta decay alone it is not possible to probe the different energy levels found in the nucleus.

In optical spectroscopy, it is well known that an entity which emits light can also absorb light
at the same wavelength (photon energy). For instance, a sodium flame can emit yellow light as well
as absorb the yellow light from a sodium vapor lamp. In the case of gamma rays, this can be seen in
Mössbauer spectroscopy. Here, a correction for the energy lost by the recoil of the nucleus is made
and the exact conditions for gamma ray absorption through resonance can be attained.

This is similar to the Franck-Condon effect seen in optical spectroscopy.

mp Health effects

All ionizing radiation causes similar damage at a cellular level, but because rays of alpha particles
and beta particles are relatively non-penetrating, external exposure to them causes only localized
damage, e.g. radiation burns to the skin. Gamma rays and neutrons are more penetrating, causing
diffuse damage throughout the body (e.g. radiation sickness, increased incidence of cancer) rather
than burns. External radiation exposure should also be distinguished from internal exposure, due to
ingested or inhaled radioactive substances, which, depending on the substance's chemical nature,
can produce both diffuse and localized internal damage. The most biologically damaging forms of
gamma radiation occur in the gamma ray window, between 3 and 10 MeV, with higher energy
gamma rays being less harmful because the body is relatively transparent to them. See cobalt-60.

mp Uses

Gamma-ray image of a truck taken with a VACIS (Vehicle and Container Imaging System)

This property means that gamma radiation is often used to kill living organisms, in a process
called irradiation. Applications of this include sterilizing medical equipment (as an alternative to
autoclaves or chemical means), removing decay-causing bacteria from many foods or preventing
fruit and vegetables from sprouting to maintain freshness and flavor.

Gamma-rays have the smallest wavelengths and the most energy of any wave in the
electromagnetic spectrum. These waves are generated by radioactive atoms and in nuclear
explosions. Gamma-rays can kill living cells, a fact which medicine uses to its advantage, using
gamma-rays to kill cancerous cells.

Gamma-rays travel to us across vast distances of the universe, only to be absorbed by the
Earth's atmosphere. Different wavelengths of light penetrate the Earth's atmosphere to different
depths. Instruments aboard high-altitude balloons and satellites like the Compton Observatory
provide our only view of the gamma-ray sky.

Due to their tissue penetrating property, gamma rays/X-rays have a wide variety of medical
uses, such as in CT scans and radiation therapy. However, as a form of ionizing radiation
they have the ability to effect molecular changes, giving them the potential to cause cancer when
DNA is affected. The molecular changes can also be used to alter the properties of semi-precious
stones, and are often used to change white topaz into blue topaz.

Despite their cancer-causing properties, gamma rays are also used to treat some types of
cancer. In the procedure called gamma-knife surgery, multiple concentrated beams of gamma rays
are directed on the growth in order to kill the cancerous cells. The beams are aimed from different
angles to concentrate the radiation on the growth while minimizing damage to the surrounding
tissues. (As an illustration of how the radiation's origin contributes to the name, a similar
technique which uses photons from linear accelerators rather than cobalt gamma decay is called "CyberKnife".)

The Moon as seen by the Compton Gamma Ray Observatory, in gamma rays of greater than 20 MeV.
These are produced by cosmic ray bombardment of its surface. The Sun, which has no similar surface
of high atomic number to act as target for cosmic rays, cannot be seen at all at these energies, which
are too high to emerge from primary nuclear reactions, such as solar nuclear fusion.

Gamma rays are also used for diagnostic purposes in nuclear medicine. Several gamma-
emitting radioisotopes are used, one of which is technetium-99m. When administered to a patient, a
gamma camera can be used to form an image of the radioisotope's distribution by detecting the
gamma radiation emitted. Such a technique can be employed to diagnose a wide range of conditions
(e.g. spread of cancer to the bones).

In the US, gamma ray detectors are beginning to be used as part of the Container Security
Initiative (CSI). These US$5 million machines are advertised to scan 30 containers per hour. The
objective of this technique is to screen merchant ship containers before they enter US ports.

mp Body response

After gamma-irradiation, and the breaking of DNA double-strands, a cell can repair the damaged
genetic material to the limit of its capability. However, a study by Rothkamm and Löbrich has shown
that the repairing process works well after high-dose exposure but is much slower in the case of a
low-dose exposure. [3]

mp Risk assessment

The natural outdoor exposure in Great Britain ranges from 2 × 10−7 to 4 × 10−7 cSv/h
(centisieverts per hour).[4] Natural exposure to gamma rays is about 0.1 to 0.2 cSv per year, and the
average total amount of radiation received in one year per inhabitant in the USA is 0.36 cSv.[5]

By comparison, the radiation dose from chest radiography is a fraction of the annual naturally
occurring background radiation dose,[6] and the dose from fluoroscopy of the stomach is, at most, 5
cSv on the skin of the back.

For acute full-body equivalent dose, 100 cSv causes slight blood changes; 200-350 cSv causes
nausea, hair loss and hemorrhaging, and will cause death in a sizable number of cases (10%-35%)
without medical treatment; 500 cSv is considered approximately the LD50 (lethal dose for 50% of
exposed population) for an acute exposure to radiation even with standard medical treatment; more
than 500 cSv brings an increasing chance of death; eventually, above 750-1000 cSv, even
extraordinary treatment, such as bone-marrow transplants, will not prevent the death of the
individual exposed (see radiation poisoning).

For low dose exposure, for example among nuclear workers, who receive an average yearly
radiation dose of 1.9 cSv, the risk of dying from cancer (excluding leukemia) increases
by 2 percent. For a dose of 10 cSv, that risk increase is at 10 percent. By comparison, the risk of dying
from cancer was increased by 32 percent for the survivors of the atomic bombing of Hiroshima and
Nagasaki.

CHAPTER 2

p EYE

Schematic diagram of the vertebrate eye Compound eye of Antarctic krill

Eyes are organs that detect light, and send electrical impulses along the optic nerve to the visual
and other areas of the brain. Complex optical systems with resolving power have come in ten
fundamentally different forms, and 96% of animal species possess a complex optical system.[1]
Image-resolving eyes are present in cnidaria, molluscs, chordates, annelids and arthropods.[]

The simplest "eyes", such as those in unicellular organisms, do nothing but detect whether the
surroundings are light or dark, which is sufficient for the entrainment of circadian rhythms. From
more complex eyes, retinal photosensitive ganglion cells send signals along the retinohypothalamic
tract to the suprachiasmatic nuclei to effect circadian adjustment.

p EVOLUTION OF THE EYE

The common origin (monophyly) of all animal eyes is now widely accepted as fact based on
shared anatomical and genetic features of all eyes; that is, all modern eyes, varied as they are, have
their origins in a proto-eye believed to have evolved some 540 million years ago.[10][11][1] The
majority of the advancements in early eyes are believed to have taken only a few million years to
develop, since the first predator to gain true imaging would have touched off an "arms race".[13] Prey
animals and competing predators alike would be at a distinct disadvantage without such capabilities
and would be less likely to survive and reproduce. Hence multiple eye types and subtypes developed
in parallel.

Eyes in various animals show adaptation to their requirements. For example, birds of prey
have much greater visual acuity than humans, and some can see ultraviolet light. The different forms
of eye in, for example, vertebrates and mollusks are often cited as examples of parallel evolution,
despite their distant common ancestry.

The earliest eyes, called "eyespots", were simple patches of photoreceptor cells, physically
similar to the receptor patches for taste and smell. These eyespots could only sense ambient
brightness: they could distinguish light and dark, but not the direction of the light source.[14] This
gradually changed as the eyespot depressed into a shallow "cup" shape, granting the ability to
slightly discriminate directional brightness by using the angle at which the light hit certain cells to
identify the source. The pit deepened over time, the opening diminished in size, and the number of
photoreceptor cells increased, forming an effective pinhole camera that was capable of slightly
distinguishing dim shapes.[15]

The thin overgrowth of transparent cells over the eye's aperture, originally formed to
prevent damage to the eyespot, allowed the segregated contents of the eye chamber to specialize
into a transparent humour that optimized colour filtering, blocked harmful radiation, improved the
eye's refractive index, and allowed functionality outside of water. The transparent protective cells
eventually split into two layers, with circulatory fluid in between that allowed wider viewing angles
and greater imaging resolution, and the thickness of the transparent layer gradually increased, in
most species with the transparent crystallin protein.[16]

The gap between tissue layers naturally formed a biconvex shape, an optimally ideal
structure for a normal refractive index. Independently, a transparent layer and a nontransparent
layer split forward from the lens: the cornea and iris. Separation of the forward layer again forms a
humour, the aqueous humour. This increases refractive power and again eases circulatory problems.
Formation of a nontransparent ring allows more blood vessels, more circulation, and larger eye
sizes.

p TYPES OF EYE

There are ten different eye layouts; indeed, every way of capturing an image known to man is represented,
with the exceptions of zoom and Fresnel lenses. Eye types can be categorized into "simple eyes",
with one concave chamber, and "compound eyes", which comprise a number of individual lenses
laid out on a convex surface.[1] Note that "simple" does not imply a reduced level of complexity or
acuity. Indeed, any eye type can be adapted for almost any behavior or environment. The only
limitation specific to eye types is that of resolution: the physics of compound eyes prevents
them from achieving a resolution better than 1° (see the sketch after this paragraph). Also, superposition eyes can achieve greater
sensitivity than apposition eyes, so are better suited to dark-dwelling creatures.[1] Eyes also fall into
two groups on the basis of their photoreceptors' cellular construction, with the photoreceptor cells
either being ciliated (as in the vertebrates) or rhabdomeric. These two groups are not monophyletic;
the cnidaria also possess ciliated cells,[17] and some annelids possess both.[18]
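The 1° figure quoted for compound eyes is essentially the diffraction limit of a very small lens, roughly θ ≈ λ/d. The sketch below is only an order-of-magnitude illustration; the 30 µm facet diameter and 500 nm wavelength are assumed example values, not data from the text:

    import math

    def diffraction_limit_deg(wavelength_nm: float, facet_diameter_um: float) -> float:
        """Rough angular resolution limit of a small lens, theta ~ lambda / d, in degrees."""
        return math.degrees((wavelength_nm * 1e-9) / (facet_diameter_um * 1e-6))

    # Assumed values: green light (500 nm) on a 30 micrometre ommatidial facet.
    print(diffraction_limit_deg(500.0, 30.0))   # ~0.95 degrees, i.e. about 1 degree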

mp Normal eyes

Simple eyes are rather ubiquitous, and lens-bearing eyes have evolved at least seven times in
vertebrates, cephalopods, annelids, crustacea and cubozoa.[1 ]

mp Pit eyes

Pit eyes, also known as stemma, are eye-spots which may be set into a pit to reduce the angles of
light that enters and affects the eyespot, to allow the organism to deduce the angle of incoming
light.[1] Found in about 85% of phyla, these basic forms were probably the precursors to more
advanced types of "simple eye". They are small, comprising up to about 100 cells covering about
100 µm.[1] The directionality can be improved by reducing the size of the aperture, by incorporating a
reflective layer behind the receptor cells, or by filling the pit with a refractile material.[1]

mp Spherical lensed eyes

The resolution of pit eyes can be greatly improved by incorporating a material with a higher
refractive index to form a lens, which may greatly reduce the blur radius encountered, hence
increasing the resolution obtainable.[1] The most basic form, still seen in some gastropods and
annelids, consists of a lens of one refractive index. A far sharper image can be obtained using
materials with a high refractive index, decreasing to the edges; this decreases the focal length and
thus allows a sharp image to form on the retina.[1] This also allows a larger aperture for a given
sharpness of image, allowing more light to enter the lens; and a flatter lens, reducing spherical
aberration.[1] Such an inhomogeneous lens is necessary in order for the focal length to drop from
about 4 times the lens radius to 2.5 radii.[1]

Heterogeneous eyes have evolved at least eight times: four or more times in gastropods,
once in the copepods, once in the annelids and once in the cephalopods.[1] No aquatic organisms
possess homogeneous lenses; presumably the evolutionary pressure for a heterogeneous lens is
great enough for this stage to be quickly "outgrown".[1]

This eye creates an image that is sharp enough that motion of the eye can cause significant
blurring. To minimize the effect of eye motion while the animal moves, most such eyes have
stabilizing eye muscles.[1]

The ocelli of insects bear a simple lens, but their focal point always lies behind the retina;
consequently they can never form a sharp image; this limits their function. Ocelli (pit-
type eyes of arthropods) blur the image across the whole retina, and are consequently excellent at
responding to rapid changes in light intensity across the whole visual field; this fast response is
further accelerated by the large nerve bundles which rush the information to the brain.[0] Focusing
the image would also cause the sun's image to be focused on a few receptors, with the possibility of
damage under the intense light; shielding the receptors would block out some light and thus reduce
their sensitivity.[0] This fast response has led to suggestions that the ocelli of insects are used mainly
in flight, because they can be used to detect sudden changes in which way is up (because light,
especially UV light which is absorbed by vegetation, usually comes from above).[0]

mp Weaknesses

One weakness of this eye construction is that chromatic aberration is still quite high[1], although
for organisms without color vision, this is a very minor concern.

A weakness of the vertebrate eye is the blind spot at the optic disc where the optic nerve is
formed at the back of the eye; there are no light sensitive rods or cones to respond to a light
stimulus at this point. By contrast, the cephalopod eye has no blind spot as the retina is in the
opposite orientation.

mp Multiple lenses

Some marine organisms bear more than one lens; for instance the copepod Pontella has three.
The outer lens has a parabolic surface, countering the effects of spherical aberration while allowing a
sharp image to be formed. Another copepod, Copilia, has two lenses in each eye, arranged like those in a
telescope.[1] Such arrangements are rare and poorly understood, but represent an interesting
alternative construction. An interesting use of multiple lenses is seen in some hunters such as eagles
and jumping spiders, which have a refractive cornea (discussed next): these have a negative lens,
enlarging the observed image by up to 50% over the receptor cells, thus increasing their optical
resolution.[1]

mp Refractive cornea

In the eyes of most terrestrial vertebrates (along with spiders and some insect larvae) the
vitreous fluid has a higher refractive index than the air, relieving the lens of the function of reducing
the focal length. This has freed it up for fine adjustments of focus, allowing a very high resolution to
be obtained.[1] As with spherical lenses, the problem of spherical aberration caused by the lens can
be countered either by using an inhomogeneous lens material, or by flattening the lens.[1] Flattening
the lens has a disadvantage: the quality of vision is diminished away from the main line of focus,
meaning that animals requiring all-round vision are at a disadvantage. Such animals often display an
inhomogeneous lens instead.[1]

As mentioned above, a refractive cornea is only useful out of water; in water, there is no
difference in refractive index between the vitreous fluid and the surrounding water. Hence creatures
which have returned to the water, such as penguins and seals, lose their refractive cornea
and return to lens-based vision. An alternative solution, borne by some divers, is to have a very
strong cornea.[1]

mp Reflector eyes

An alternative to a lens is to line the inside of the eye with "mirrors", and reflect the image to
focus at a central point.[1] The nature of these eyes means that if one were to peer into the pupil of
an eye, one would see the same image that the organism would see, reflected back out.[1]

Many small organisms such as rotifers, copepods and platyhelminths use such organs, but
these are too small to produce usable images.[1] Some larger organisms, such as scallops, also use
reflector eyes. The scallop Pecten has up to 100 millimeter-scale reflector eyes fringing the edge of
its shell. It detects moving objects as they pass successive lenses.[1]

There is at least one vertebrate, the spookfish, whose eyes include reflective optics for focusing
of light. Each of the two eyes of a spookfish collects light from both above and below; the light
coming from above is focused by a lens, while that coming from below, by a curved mirror
composed of many layers of small reflective plates made of guanine crystals.[1]

mp Compound eyes

An image of a house fly compound eye surface, taken using a scanning electron microscope at ×457 magnification. Arthropods such as this carpenter bee have compound eyes.

A compound eye may consist of thousands of individual photoreceptor units. The image
perceived is a combination of inputs from the numerous ommatidia (individual "eye units"), which
are located on a convex surface, thus pointing in slightly different directions. Compared with simple
eyes, compound eyes possess a very large view angle, and can detect fast movement and, in some
cases, the polarization of light.[] Because the individual lenses are so small, the effects of diffraction
impose a limit on the possible resolution that can be obtained. This can only be countered by
increasing lens size and number. To see with a resolution comparable to our simple eyes, humans
would require compound eyes which would each reach the size of their head.

Compound eyes fall into two groups: apposition eyes, which form multiple inverted images,
and superposition eyes, which form a single erect image.[3] Compound eyes are common in
arthropods, and are also present in annelids and some bivalved molluscs.[4]

Compound eyes, in arthropods at least, grow at their margins by the addition of new ommatidia.[5]

Structure of the ommatidia of apposition compound eyes

mp Apposition eyes

Apposition eyes are the most common form of eye, and are presumably the ancestral form of
compound eye. They are found in all arthropod groups, although they may have evolved more than
once within this phylum.[1] Some annelids and bivalves also have apposition eyes. They are also
possessed by Limulus, the horseshoe crab, and there are suggestions that other chelicerates
developed their simple eyes by reduction from a compound starting point.[1] (Some caterpillars
appear to have evolved compound eyes from simple eyes in the opposite fashion.)

Apposition eyes work by gathering a number of images, one from each eye, and combining them
in the brain, with each eye typically contributing a single point of information.

The typical apposition eye has a lens focusing light from one direction on the rhabdom, while
light from other directions is absorbed by the dark wall of the ommatidium. In the other kind of
apposition eye, found in the Strepsiptera, lenses are not fused to one another, and each forms an
entire image; these images are combined in the brain. This is called the schizochroal compound eye
or the neural superposition eye. Because images are combined additively, this arrangement allows
vision under lower light levels.[1]
mp Superposition eyes

The second type is named the superposition eye. The superposition eye is divided into three
types; the refracting, the reflecting and the parabolic superposition eye. The refracting superposition
eye has a gap between the lens and the rhabdom, and no side wall. Each lens takes light at an angle
to its axis and reflects it to the same angle on the other side. The result is an image at half the radius
of the eye, which is where the tips of the rhabdoms are. This kind is used mostly by nocturnal
insects. In the parabolic superposition compound eye type, seen in arthropods such as mayflies, the
parabolic surfaces of the inside of each facet focus light from a reflector to a sensor array. Long-
bodied decapod crustaceans such as shrimp, prawns, crayfish and lobsters are alone in having
reflecting superposition eyes, which also has a transparent gap but uses corner mirrors instead of
lenses.

mp Parabolic superposition

This eye type functions by refracting light, then using a parabolic mirror to focus the image; it
combines features of superposition and apposition eyes.

mp Other

The compound eye of a dragonfly

Good fliers like flies or honey bees, or prey-catching insects like praying mantis or
dragonflies, have specialized zones of ommatidia organized into a fovea area which gives acute
vision. In the acute zone the eyes are flattened and the facets larger. The flattening allows more
ommatidia to receive light from a spot and therefore higher resolution.

There are some exceptions from the types mentioned above. Some insects have a so-called single
lens compound eye, a transitional type which is something between a superposition type of the
multi-lens compound eye and the single lens eye found in animals with simple eyes. Then there is
the mysid shrimp Dioptromysis paucispinosa. The shrimp has an eye of the refracting superposition
type; at the rear of each eye there is a single large facet, three times the diameter of the others,
and behind it an enlarged crystalline cone. This projects an upright image
on a specialized retina. The resulting eye is a mixture of a simple eye within a compound eye.

Another version is the pseudofaceted eye, as seen in Scutigera. This type of eye consists of a cluster
of numerous ocelli on each side of the head, organized in a way that resembles a true compound
eye.
The body of Ophiocoma wendtii, a type of brittle star, is covered with ommatidia, turning its
whole skin into a compound eye. The same is true of many chitons.

p CILIARY BODY AND VITREOUS BODY

The ciliary body is the circumferential tissue inside the eye composed of the ciliary muscle and
ciliary processes.[1] It is triangular in horizontal section and is coated by a double layer, the ciliary
epithelium. This epithelium produces the aqueous humor.[] The inner layer is transparent and
covers the vitreous body, and is continuous from the neural tissue of the retina. The outer layer is
highly pigmented, continuous with the retinal pigment epithelium, and constitutes the cells of the
dilator muscle.

The vitreous is the transparent, colourless, gelatinous mass that fills the space between the lens
of the eye and the retina lining the back of the eye. It is produced by certain retinal cells. It is of
rather similar composition to the cornea, but contains very few cells (mostly phagocytes which
remove unwanted cellular debris in the visual field, as well as the hyalocytes of Balazs of the surface
of the vitreous, which reprocess the hyaluronic acid), no blood vessels, and 98-99% of its volume is
water (as opposed to 75% in the cornea) with salts, sugars, vitrosin (a type of collagen), a network of
collagen type II fibers with the mucopolysaccharide hyaluronic acid, and also a wide array of proteins
in micro amounts. Despite containing so little solid matter, the vitreous holds the eye taut. The lens, on the other hand, is tightly
packed with cells.[1] However, the vitreous has a viscosity two to four times that of pure water,
giving it a gelatinous consistency. It also has a refractive index of 1.336[].

p RELATIONSHIP TO LIFE REQUIREMENTS

Eyes are generally adapted to the environment and life requirements of the organism which
bears them. For instance, the distribution of photoreceptors tends to match the area in which the
highest acuity is required, with horizon-scanning organisms, such as those that live on the African
plains, having a horizontal line of high-density ganglia, while tree-dwelling creatures which require
good all-round vision tend to have a symmetrical distribution of ganglia, with acuity decreasing
outwards from the centre.

Of course, for most eye types, it is impossible to diverge from a spherical form, so only the
density of optical receptors can be altered. In organisms with compound eyes, it is the number of
ommatidia rather than ganglia that reflects the region of highest data acquisition. Optical
superposition eyes are constrained to a spherical shape, but other forms of compound eyes may
deform to a shape where more ommatidia are aligned to, say, the horizon, without altering the size
or density of individual ommatidia. Eyes of horizon-scanning organisms have stalks so they can be
easily aligned to the horizon when this is inclined, for example if the animal is on a slope. An
extension of this concept is that the eyes of predators typically have a zone of very acute vision at
their centre, to assist in the identification of prey. In deep water organisms, it may not be the centre
of the eye that is enlarged. The hyperiid amphipods are deep water animals that feed on organisms
above them. Their eyes are almost divided into two, with the upper region thought to be involved in
detecting the silhouettes of potential prey (or predators) against the faint light of the sky above.
Accordingly, deeper water hyperiids, where the light against which the silhouettes must be
compared is dimmer, have larger "upper-eyes", and may lose the lower portion of their eyes
altogether. Depth perception can be enhanced by having eyes which are enlarged in one direction;
distorting the eye slightly allows the distance to the object to be estimated with a high degree of
accuracy.
Acuity is higher among male organisms that mate in mid-air, as they need to be able to spot and
assess potential mates against a very large backdrop. On the other hand, the eyes of organisms
which operate in low light levels, such as around dawn and dusk or in deep water, tend to be larger
to increase the amount of light that can be captured.

It is not only the shape of the eye that may be affected by lifestyle. Eyes can be the most visible
parts of organisms, and this can act as a pressure on organisms to have more transparent eyes at the
cost of function.

Eyes may be mounted on stalks to provide better all-round vision, by lifting them above an
organism's carapace; this also allows them to track predators or prey without moving the head.

p VISUAL ACUITY

A hawk's eye

Visual acuity is often measured in cycles per degree (CPD), which measures an angular
resolution, or how much an eye can differentiate one object from another in terms of visual angles.
Resolution in CPD can be measured by bar charts of different numbers of white/black stripe cycles.
For example, if each pattern is 1.75 cm wide and is placed at 1 m distance from the eye, it will
subtend an angle of 1 degree, so the number of white/black bar pairs on the pattern will be a
measure of the cycles per degree of that pattern. The highest such number that the eye can resolve
as stripes, or distinguish from a gray block, is then the measurement of visual acuity of the eye.

For a human eye with excellent acuity, the maximum theoretical resolution is 50 CPD (1.2
arcminutes per line pair, or a 0.35 mm line pair, at 1 m). A rat can resolve only about 1 to 2 CPD. A
horse has higher acuity through most of the visual field of its eyes than a human has, but does not
match the high acuity of the human eye's central fovea region.

Spherical aberration limits the resolution of a 7 mm pupil to about 3 arcminutes per line pair. At
a pupil diameter of 3 mm, the spherical aberration is greatly reduced, resulting in an improved
resolution of approximately 1.7 arcminutes per line pair. A resolution of 2 arcminutes per line pair,
equivalent to a 1 arcminute gap in an optotype, corresponds to 20/20 (normal vision) in humans.
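The bar-chart example above can be checked numerically. In the Python sketch below, the 1.75 cm pattern width and 1 m viewing distance are the values given in the text; the helper function itself is my own illustration:

    import math

    def cycles_per_degree(n_line_pairs: int, pattern_width_m: float, distance_m: float) -> float:
        """Cycles per degree (CPD) for a bar pattern of n black/white line pairs."""
        subtended_deg = math.degrees(2.0 * math.atan(pattern_width_m / (2.0 * distance_m)))
        return n_line_pairs / subtended_deg

    # A 1.75 cm wide pattern at 1 m subtends about 1 degree, so 50 line
    # pairs drawn on it correspond to roughly 50 CPD.
    print(cycles_per_degree(50, 0.0175, 1.0))   # ~49.9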
p PERCEPTION OF COLOURS

All organisms are restricted to a small range of the electromagnetic spectrum; this varies from
creature to creature, but is mainly between 400 and 700 nm. This is a rather small section of the
electromagnetic spectrum, probably reflecting the submarine evolution of the organ: water blocks
out all but two small windows of the EM spectrum, and there has been no evolutionary pressure
among land animals to broaden this range.

The most sensitive pigment, rhodopsin, has a peak response at 500 nm. Small changes to the
genes coding for this protein can tweak the peak response by a few nm; pigments in the lens can
also "filter" incoming light, changing the peak response. Many organisms are unable to discriminate
between colors, seeing instead in shades of "grey"; colour vision necessitates a range of pigment
cells which are primarily sensitive to smaller ranges of the spectrum. In primates, geckos, and other
organisms, these take the form of cone cells, from which the more sensitive rod cells evolved. Even if
organisms are physically capable of discriminating different colours, this does not necessarily mean
that they can perceive the different colours; only with behavioral tests can this be deduced.

Most organisms with colour vision are able to detect ultraviolet light. This high energy light can
be damaging to receptor cells. With a few exceptions (snakes, placental mammals), most organisms
avoid these effects by having absorbent oil droplets around their cone cells. The alternative,
developed by organisms that had lost these oil droplets in the course of evolution, is to make the
lens impervious to UV light; this precludes the possibility of any UV light being detected, as it does
not even reach the retina.

p RODS AND CONES

The retina contains two major types of light-sensitive photoreceptor cells used for vision: the
rods and the cones.

Rods cannot distinguish colors, but are responsible for low-light (scotopic) monochrome (black-
and-white) vision; they work well in dim light as they contain a pigment, rhodopsin (visual purple),
which is sensitive at low light intensity, but saturates at higher (photopic) intensities. Rods are
distributed throughout the retina but there are none at the fovea and none at the blind spot. Rod
density is greater in the peripheral retina than in the central retina.

Cones are responsible for color vision. They require brighter light to function than rods require.
There are three types of cones, maximally sensitive to long-wavelength, medium-wavelength, and
short-wavelength light (often referred to as red, green, and blue, respectively, though the sensitivity
peaks are not actually at these colors). The color seen is the combined effect of stimuli to, and
responses from, these three types of cone cells. Cones are mostly concentrated in and near the
fovea. Only a few are present at the sides of the retina. Objects are seen most sharply in focus when
their images fall on the fovea, as when one looks at an object directly. Cone cells and rods are
connected through intermediate cells in the retina to nerve fibers of the optic nerve. When rods and
cones are stimulated by light, the nerves send off impulses through these fibers to the brain.

p PIGMENTATION

The pigment molecules used in the eye are various, but can be used to define the evolutionary
distance between different groups, and can also be an aid in determining which are closely related,
although problems of convergence do exist.
Opsins are the pigments involved in photoreception. Other pigments, such as melanin, are
used to shield the photoreceptor cells from light leaking in from the sides. The opsin protein group
evolved long before the last common ancestor of animals, and has continued to diversify since.

There are two types of opsin involved in vision: c-opsins, which are associated with ciliary-
type photoreceptor cells, and r-opsins, associated with rhabdomeric photoreceptor cells. The eyes of
vertebrates usually contain ciliary cells with c-opsins, and (bilaterian) invertebrates have
rhabdomeric cells in the eye with r-opsins. However, some ganglion cells of vertebrates express r-
opsins, suggesting that their ancestors used this pigment in vision, and that remnants survive in the
eyes. Likewise, c-opsins have been found to be expressed in the brains of some invertebrates. They
may have been expressed in ciliary cells of larval eyes, which were subsequently resorbed into the
brain on metamorphosis to the adult form. C-opsins are also found in some derived bilaterian-
invertebrate eyes, such as the pallial eyes of the bivalve molluscs; however, the lateral eyes (which
were presumably the ancestral type for this group, if eyes evolved once there) always use r-opsins.
Cnidaria, which are an outgroup to the taxa mentioned above, express c-opsins, but r-opsins are yet
to be found in this group. Incidentally, the melanin produced in the cnidaria is produced in the same
fashion as that in vertebrates, suggesting the common descent of this pigment.

p HUMAN EYE

The human eye is an organ which reacts to light for several purposes.

As a conscious sense organ, the eye allows vision. Rod and cone cells in the retina allow
conscious light perception and vision including color differentiation and the perception of
depth. The human eye can distinguish about 16 million colors.

In common with the eyes of other mammals, the human eye's non-image-forming
photosensitive ganglion cells in the retina receive the light signals which affect adjustment
of the size of the pupil, regulation and suppression of the hormone melatonin and
entrainment of the body clock.
p GENERAL PROPERTIES

The eye is not properly a sphere, rather it is a fused two-piece unit. The smaller frontal unit,
more curved, called the cornea is linked to the larger unit called the sclera. The cornea and sclera are
connected by a ring called the limbus. The iris (which gives the eye its color) and its black center, the pupil,
are seen instead of the cornea due to the cornea's transparency. To see inside the eye, an
ophthalmoscope is needed, since light is not reflected out. The fundus (area opposite the pupil)
shows the characteristic pale optic disk (papilla), where vessels entering the eye pass across and
optic nerve fibers depart the globe.

p DIMENSIONS

A human eye

The dimensions differ among adults by only one or two millimeters. The vertical measure,
generally less than the horizontal distance, is about 24 mm among adults, and about 16-17 mm
(about 0.65 inch) at birth. The eyeball grows rapidly, increasing to 22.5-23 mm (approx. 0.89 in) by the age of
three years. From then to age 13, the eye attains its full size. The volume is 6.5 ml (0.4 cu. in.) and
the weight is 7.5 g (0.25 oz.)

p COMPONENTS

The eye is made up of three coats, enclosing three transparent structures. The outermost layer is
composed of the cornea and sclera. The middle layer consists of the choroid, ciliary body, and iris.
The innermost is the retina, which gets its circulation from the vessels of the choroid as well as the
retinal vessels, which can be seen in an ophthalmoscope.

Within these coats are the aqueous humor, the vitreous body, and the flexible lens. The aqueous
humor is a clear fluid that is contained in two areas: the anterior chamber between the cornea and
the iris and exposed area of the lens; and the posterior chamber, behind the iris and the rest. The
lens is suspended to the ciliary body by the suspensory ligament (Zonule of Zinn), made up of fine
transparent fibers. The vitreous body is a clear jelly that is much larger than the aqueous humor, and
is bordered by the sclera, zonule, and lens. They are connected via the pupil.[]

p DYNAMIC RANGE

The retina has a static contrast ratio of around 100:1 (about 6.5 f-stops). As soon as the eye
moves (saccades) it re-adjusts its exposure both chemically and geometrically by adjusting the iris
which regulates the size of the pupil. Initial dark adaptation takes place in approximately four
seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal
chemistry (the Purkinje effect) is mostly complete in thirty minutes. Hence, a dynamic contrast
ratio of about 1,000,000:1 (about 20 f-stops) is possible.[3] The process is nonlinear and multifaceted,
so an interruption by light merely starts the adaptation process over again. Full adaptation is
dependent on good blood flow; thus dark adaptation may be hampered by poor circulation, and
vasoconstrictors like alcohol or tobacco.

The eye includes a lens not dissimilar to lenses found in optical instruments such as cameras and
the same principles can be applied. The pupil of the human eye is its aperture; the iris is the
diaphragm that serves as the aperture stop. Refraction in the cornea causes the effective aperture
(the entrance pupil) to differ slightly from the physical pupil diameter. The entrance pupil is typically
about 4 mm in diameter, although it can range from 2 mm (f/8.3) in a brightly lit place to 8 mm
(f/2.1) in the dark. The latter value decreases slowly with age; older people's eyes sometimes dilate
to no more than 5-6 mm.
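The f-numbers and f-stop figures in this section follow from two simple formulas: N = f/D for the f-number and log2 of the contrast ratio for the number of stops. The sketch below assumes an effective focal length of about 17 mm for the eye, a round value consistent with the f-numbers quoted above rather than a figure taken from the text:

    import math

    def f_number(focal_length_mm: float, pupil_diameter_mm: float) -> float:
        """Photographic f-number, N = f / D."""
        return focal_length_mm / pupil_diameter_mm

    def contrast_in_stops(ratio: float) -> float:
        """Express a contrast ratio as f-stops (doublings of light), log2(ratio)."""
        return math.log2(ratio)

    EYE_FOCAL_LENGTH_MM = 17.0   # assumed effective focal length of the eye
    print(f_number(EYE_FOCAL_LENGTH_MM, 2.0))   # bright light, 2 mm pupil -> ~f/8.5
    print(f_number(EYE_FOCAL_LENGTH_MM, 8.0))   # dark, 8 mm pupil         -> ~f/2.1
    print(contrast_in_stops(100))               # static 100:1     -> ~6.6 stops
    print(contrast_in_stops(1_000_000))         # dynamic 10^6:1   -> ~19.9 stops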

p FIELD OF VIEW

The approximate field of view of a human eye is 95° out, 75° down, 60° in, and 60° up. About 12-15°
temporal and 1.5° below the horizontal is the optic nerve or blind spot, which is roughly 7.5° in height
and 5.5° in width.

p EYE IRRITATION

Eye irritation is a common problem experienced by people of all ages. There are numerous
causes, some of which can be prevented and treated properly. However, in order to take precautions
it is important to have some basic knowledge regarding what eye irritants are and where they can be
found in our environments. Eye irritation depends somewhat on destabilization of the outer-eye tear
film. Certain volatile organic compounds that are both chemically reactive and airway irritants may
cause eye irritation as well. Personal factors (eg, use of contact lenses, eye make-up, and certain
medication) may also affect destabilization of the tear film and possibly result in more eye
symptoms. Nevertheless, if airborne particles alone should destabilize the tear film and cause eye
irritation, their content of surface-active compounds must be high. An integrated physiological risk
model with blink frequency, destabilization, and break-up of the eye tear film as inseparable
phenomena may explain eye irritation among office workers in terms of occupational, climate, and
eye-related physiological risk factors.

In a study conducted by NIOSH, the frequency of reported symptoms in industrial buildings was
investigated. The study's results were that eye irritation was the most frequent symptom in
industrial building spaces, at 81%. Modern office work with use of office equipment has raised
concerns about possible adverse health effects. Since the 1970s, reports have linked mucosal, skin,
and general symptoms to work with self-copying paper. Emission of various particulate and volatile
substances has been suggested as specific causes. These symptoms have been related to Sick
Building Syndrome, which involves symptoms such as irritation to the eyes, skin, and upper airways,
headache and fatigue.

Many of the symptoms described in Sick Building Syndrome (SBS) and multiple chemical
sensitivity (MCS) resemble the symptoms known to be elicited by airborne irritant chemicals. A
repeated measurement design was employed in the study of acute symptoms of eye and respiratory
tract irritation resulting from occupational exposure to sodium borate dusts. The symptom
assessment of the 7 exposed and 7 unexposed subjects comprised interviews before the shift
began and then at regular hourly intervals for the next six hours of the shift, four days in a row.[ ]
Exposures were monitored concurrently with a personal real time aerosol monitor. Two different
exposure profiles, a daily average and short term (15 minute) average, were used in the analysis.
Exposure-response relations were evaluated by linking incidence rates for each symptom with
categories of exposure.
Acute incidence rates for nasal, eye, and throat irritation, and coughing and breathlessness were
found to be associated with increased exposure levels of both exposure indices. Steeper exposure-
response slopes were seen when short term exposure concentrations were used. Results from
multivariate logistic regression analysis suggest that current smokers tended to be less sensitive to
the exposure to airborne sodium borate dust.[ ]

p EYE MOVEMENT

MRI scan of human eye

The visual system in the brain is too slow to process information if the images are slipping
across the retina at more than a few degrees per second. Thus, for humans to be able to see while
moving, the brain must compensate for the motion of the head by turning the eyes. Another
complication for vision in frontal-eyed animals is the development of a small area of the retina with
a very high visual acuity. This area is called the fovea, and covers about 2 degrees of visual angle in
people. To get a clear view of the world, the brain must turn the eyes so that the image of the object
of regard falls on the fovea. Eye movements are thus very important for visual perception, and any
failure to make them correctly can lead to serious visual disabilities.

Having two eyes is an added complication, because the brain must point both of them
accurately enough that the object of regard falls on corresponding points of the two retinas;
otherwise, double vision would occur. The movements of different body parts are controlled by
striated muscles acting around joints. The movements of the eye are no exception, but they have
special advantages not shared by skeletal muscles and joints, and so are considerably different.

p EXTRAOCULAR MUSCLES

Each eye has six muscles that control its movements: the lateral rectus, the medial rectus, the
inferior rectus, the superior rectus, the inferior oblique, and the superior oblique. When the muscles
exert different tensions, a torque is exerted on the globe that causes it to turn, in almost pure
rotation, with only about one millimeter of translation.[11] Thus, the eye can be considered as
undergoing rotations about a single point in the center of the eye.
p RAPID EYE MOVEMENT

Rapid eye movement, or REM for short, typically refers to the sleep stage during which the most
vivid dreams occur. During this stage, the eyes move rapidly. It is not in itself a unique form of eye
movement.

p SACCADES

Saccades are quick, simultaneous movements of both eyes in the same direction controlled by
the frontal lobe of the brain. Some irregular drifts, movements, smaller than a saccade and larger
than a microsaccade, subtend up to six minutes of arc.

p MICROSACCADES

Even when looking intently at a single spot, the eyes drift around. This ensures that individual
photosensitive cells are continually stimulated in different degrees. Without changing input, these
cells would otherwise stop generating output. Microsaccades move the eye no more than a total of
0.2° in adult humans.

p VESTIBULO-OCULAR REFLEX

The vestibulo-ocular reflex is a reflex eye movement that stabilizes images on the retina during
head movement by producing an eye movement in the direction opposite to head movement, thus
preserving the image on the center of the visual field. For example, when the head moves to the
right, the eyes move to the left, and vice versa.

p SMOOTH PURSUIT MOVEMENT

The eyes can also follow a moving object around. This tracking is less accurate than the
vestibulo-ocular reflex, as it requires the brain to process incoming visual information and supply
feedback. Following an object moving at constant speed is relatively easy, though the eyes will often
make saccadic jerks to keep up. The smooth pursuit movement can move the eye at up to 100°/s in
adult humans.

It is more difficult to visually estimate speed in low light conditions or while moving, unless there is
another point of reference for determining speed.

p OPTOKINETIC REFLEX

The optokinetic reflex is a combination of a saccade and smooth pursuit movement. When, for
example, looking out of the window at a moving train, the eyes can focus on a 'moving' train for a
short moment (through smooth pursuit), until the train moves out of the field of vision. At this point,
the optokinetic reflex kicks in, and moves the eye back to the point where it first saw the train
(through a saccade).

p VERGENCE MOVEMENTS

The two eyes converge to point to the same object

When a creature with binocular vision looks at an object, the eyes must rotate around a
vertical axis so that the projection of the image is in the centre of the retina in both eyes. To look at
an object closer by, the eyes rotate 'towards each other' (convergence), while for an object farther
away they rotate 'away from each other' (divergence). Exaggerated convergence is called
cross-eyed viewing (focusing on the nose, for example). When looking into the distance, or when 'staring into
nothingness', the eyes neither converge nor diverge.

Vergence movements are closely connected to accommodation of the eye. Under normal
conditions, changing the focus of the eyes to look at an object at a different distance will
automatically cause vergence and accommodation.

There are many diseases, disorders, and age-related changes that may affect the eyes and
surrounding structures.

As the eye ages certain changes occur that can be attributed solely to the aging process.
Most of these anatomic and physiologic processes follow a gradual decline. With aging, the quality
of vision worsens due to reasons independent of aging eye diseases. While there are many changes
of significance in the nondiseased eye, the most functionally important changes seem to be a
reduction in pupil size and the loss of accommodation or focusing capability (presbyopia). The area
of the pupil governs the amount of light that can reach the retina. The extent to which the pupil
dilates also decreases with age. Because of the smaller pupil size, older eyes receive much less light
at the retina. In comparison to younger people, it is as though older persons wear medium-density
sunglasses in bright light and extremely dark glasses in dim light. Therefore, for any detailed visually
guided tasks on which performance varies with illumination, older persons require extra lighting.
Certain ocular diseases can come from sexually transmitted diseases such as herpes and genital
warts. If contact between eye and area of infection occurs, the STD can be transmitted to the eye.

With aging a prominent white ring develops in the periphery of the cornea, called arcus
senilis. Aging causes laxity and downward shift of eyelid tissues and atrophy of the orbital fat. These
changes contribute to the etiology of several eyelid disorders such as ectropion, entropion,
dermatochalasis, and ptosis. The vitreous gel undergoes liquefaction (posterior vitreous detachment
or PVD) and its opacities Ͷ visible as floaters Ͷ gradually increase in number.
Various eye care professionals, including ophthalmologists, optometrists, and opticians, are
involved in the treatment and management of ocular and vision disorders. A Snellen chart is one
type of eye chart used to measure visual acuity. At the conclusion of an eye examination, an eye
doctor may provide the patient with an eyeglass prescription for corrective lenses. Some disorders
of the eyes for which corrective lenses are prescribed include myopia (near-sightedness) which
affects one-third of the population, hyperopia (far-sightedness) which affects one quarter of the
population, and presbyopia, a loss of focusing range due to aging.

p LENS

The lens is a transparent, biconvex structure in the eye that, along with the cornea, helps to
refract light to be focused on the retina. The lens, by changing shape, functions to change the focal
distance of the eye so that it can focus on objects at various distances, thus allowing a sharp real
image of the object of interest to be formed on the retina. This adjustment of the lens is known as
accommodation (see also Accommodation, below). It is similar to the focusing of a photographic
camera via movement of its lenses. The lens is flatter on its anterior side.

The lens is also known as the aquula (Latin, a little stream, dim. of aqua, water) or crystalline
lens. In humans, the refractive power of the lens in its natural environment is approximately 18
dioptres, roughly one-third of the eye's total power.
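Dioptres are reciprocal focal lengths in metres (P = 1/f), so the 18-dioptre figure can be related directly to a focal length. In the sketch below, treating the whole eye as roughly three times the lens power is only a restatement of the "roughly one-third" remark above, not an exact optical model:

    def focal_length_mm(power_dioptres: float) -> float:
        """Focal length in millimetres for an optical power in dioptres, f = 1000 / P."""
        return 1000.0 / power_dioptres

    lens_power = 18.0                    # dioptres, as stated for the lens in situ
    total_eye_power = 3.0 * lens_power   # "roughly one-third" implies ~54 D for the whole eye
    print(focal_length_mm(lens_power))       # ~55.6 mm for the lens alone
    print(focal_length_mm(total_eye_power))  # ~18.5 mm for the whole eye (approx.)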

Light from a single point of a distant object and light from a single point of a near object being brought to a focus by changing the curvature of the lens.

Schematic diagram of the human eye

p STRUCTURE
The lens has three main parts: the lens capsule, the lens epithelium, and the lens fibers. The lens
capsule forms the outermost layer of the lens and the lens fibers form the bulk of the interior of the
lens. The cells of the lens epithelium, located between the lens capsule and the outermost layer of
lens fibers, are found only on the anterior side of the lens.
p LENS CAPSULE

The lens capsule is a smooth, transparent basement membrane that completely surrounds the
lens. The capsule is elastic and is composed of collagen. It is synthesized by the lens epithelium and
its main components are Type IV collagen and sulfated glycosaminoglycans (GAGs).[1] The capsule is
very elastic and so causes the lens to assume a more globular shape when not under the tension of
the zonular fibers, which connect the lens capsule to the ciliary body. The capsule varies from 2 to 28
micrometres in thickness, being thickest near the equator and thinnest near the posterior pole.[1]
The lens capsule may contribute to the lens's greater anterior curvature compared with its posterior.

p LENS EPITHELIUM

The lens epithelium, located in the anterior portion of the lens between the lens capsule and the
lens fibers, is a simple cuboidal epithelium.[1] The cells of the lens epithelium regulate most of the
homeostatic functions of the lens.[4] As ions, nutrients, and liquid enter the lens from the aqueous
humor, Na+/K+ ATPase pumps in the lens epithelial cells pump ions out of the lens to maintain
appropriate lens osmolarity and volume, with equatorially positioned lens epithelium cells
contributing most to this current. The activity of the Na+/K+ ATPases keeps water and current
flowing through the lens from the poles and exiting through the equatorial regions.

The cells of the lens epithelium also serve as the progenitors for new lens fibers. It constantly lays
down fibers in the embryo, fetus, infant, and adult, and continues to lay down fibers for lifelong
growth.

p LENS FIBERS

The lens fibers form the bulk of the lens. They are long, thin, transparent cells, firmly packed,
with diameters typically between 4 and 7 micrometres and lengths of up to 12 mm.[1] The lens fibers
stretch lengthwise from the posterior to the anterior poles and, when cut horizontally, are arranged
in concentric layers rather like the layers of an onion. If cut along the equator, it appears as a
honeycomb. The middle of each fiber lies on the equator.[5] These tightly packed layers of lens fibers
are referred to as laminae. The lens fibers are linked together via gap junctions and interdigitations
of the cells that resemble "ball and socket" forms.

The lens is split into regions depending on the age of the lens fibers of a particular layer. Moving
outwards from the central, oldest layer, the lens is split into an embryonic nucleus, the fetal nucleus,
the adult nucleus, and the outer cortex. New lens fibers, generated from the lens epithelium, are
added to the outer cortex. Mature lens fibers have no organelles or nuclei.

p ACCOMMODATION

An image that is partially in focus, but mostly out of focus in varying degrees.

The lens is flexible and its curvature is controlled by ciliary muscles through the zonules. By
changing the curvature of the lens, one can focus the eye on objects at different distances from it.
This process is called accommodation. At short focal distance the ciliary muscle contracts, zonule
fibers loosen, and the lens thickens, resulting in a rounder shape and thus high refractive power.
Changing focus to an object at a greater distance requires the relaxation of the ciliary muscle, which
in turn increases the tension on the zonules, flattening the lens and thus increasing the focal
distance.

The refractive index of the lens varies from approximately 1.406 in the central layers down to 1.386 in the less dense cortex of the lens.[6] This index gradient enhances the optical power of the lens.
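The effect of the index values on optical power can be illustrated with the thin-lens lensmaker's equation for a lens immersed in a medium. The Python sketch below compares the power obtained with the cortical and central index values quoted above; the surrounding index and the radii of curvature are illustrative assumptions, not values from the text, and a real gradient-index lens is more powerful than either uniform-index estimate would suggest.

    # Rough sketch: power of a thin biconvex lens immersed in a medium, evaluated
    # at the two refractive indices quoted for the human lens. The surrounding
    # index and the radii of curvature are assumed values for illustration only.

    def thin_lens_power(n_lens, n_medium, r1_m, r2_m):
        """Lensmaker's equation for a thin lens in a medium; returns 1/f in dioptres."""
        return (n_lens / n_medium - 1.0) * (1.0 / r1_m - 1.0 / r2_m)

    n_aqueous = 1.336            # assumed index of the surrounding aqueous/vitreous humour
    r1, r2 = 0.010, -0.006       # assumed anterior and posterior radii of curvature (metres)

    for n_lens in (1.386, 1.406):    # cortex vs. central layers (values from the text)
        power = thin_lens_power(n_lens, n_aqueous, r1, r2)
        print(f"n = {n_lens}: power of a uniform-index lens ~ {power:.1f} D")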

Aquatic animals must rely on their lens to provide almost the entire refractive power of the eye, as the water-cornea interface does not have a large enough difference in refractive index to provide significant refractive power. As such, lenses in aquatic eyes tend to be much rounder and harder.

p Crystallins and transparency

Crystallins are water-soluble proteins that compose over 90% of the protein within the lens.[7] The three main crystallin types found in the eye are α-, β-, and γ-crystallins. Crystallins tend to form soluble, high-molecular-weight aggregates that pack tightly in lens fibers, thus increasing the index of refraction of the lens while maintaining its transparency. β- and γ-crystallins are found primarily in the lens, while subunits of α-crystallin have been isolated from other parts of the eye and the body. α-crystallin proteins belong to a larger superfamily of molecular chaperone proteins, and so it is believed that the crystallin proteins were evolutionarily recruited from chaperone proteins for optical purposes.[8] The chaperone functions of α-crystallin may also help maintain the lens proteins, which must last for a person's entire lifetime.[8]

Another important factor in maintaining the transparency of the lens is the absence of light-
scattering organelles such as the nucleus, endoplasmic reticulum, and mitochondria within the
mature lens fibers. Lens fibers also have a very extensive cytoskeleton that maintains the precise
shape and packing of the lens fibers; disruptions/mutations in certain cytoskeletal elements can lead
to the loss of transparency.[9]
p Development



Development of the human lens begins at the 4 mm embryonic stage. Unlike the rest of the eye,
which is derived mostly from the neural ectoderm, the lens is derived from the surface ectoderm.
The first stage of lens differentiation takes place when the optic vesicle, which is formed from
outpocketings in the neural ectoderm, comes in proximity to the surface ectoderm. The optic vesicle
induces nearby surface ectoderm to form the lens placode. At the 4 mm stage, the lens placode is a
single monolayer of columnar cells.

As development progresses, the lens placode begins to deepen and invaginate. As the placode continues to deepen, the opening to the surface ectoderm constricts and the lens cells form a structure known as the lens vesicle. By the 10 mm stage, the lens vesicle has completely separated
from the surface ectoderm.

After the 10 mm stage, signals from the developing neural retina induce the cells closest to the posterior end of the lens vesicle to begin elongating toward the anterior end of the vesicle.[10] These
signals also induce the synthesis of crystallins.[10] These elongating cells eventually fill in the lumen of
the vesicle to form the primary fibers, which become the embryonic nucleus in the mature lens. The
cells of the anterior portion of the lens vesicle give rise to the lens epithelium.

Additional secondary fibers are derived from lens epithelial cells located toward the equatorial
region of the lens. These cells lengthen anteriorly and posteriorly to encircle the primary fibers. The
new fibers grow longer than those of the primary layer, but as the lens gets larger, the ends of the
newer fibers cannot reach the posterior or anterior poles of the lens. The lens fibers that do not
reach the poles form tight, interdigitating seams with neighboring fibers. These seams are readily
visible and are termed sutures. The suture patterns become more complex as more layers of lens
fibers are added to the outer portion of the lens.

The lens continues to grow after birth, with the new secondary fibers being added as outer
layers. New lens fibers are generated from the equatorial cells of the lens epithelium, in a region
referred to as the germinative zone. The lens epithelial cells elongate, lose contact with the capsule
and epithelium, synthesize crystallin, and then finally lose their organelles as they become mature
lens fibers.[8] From development through early adulthood, the addition of secondary lens fibers results in the lens growing more ellipsoid in shape; after about age 20, however, the lens grows rounder with time.[1]

p Nourishment

The lens is metabolically active and requires nourishment in order to maintain its growth and transparency. Compared with other tissues in the eye, however, the lens has considerably lower energy demands.[11]

By nine weeks into human development, the lens is surrounded and nourished by a net of
vessels, the tunica vasculosa lentis, which is derived from the hyaloid artery.[10] Beginning in the
fourth month of development, the hyaloid artery and its related vasculature begin to atrophy and
completely disappear by birth.[1] In the postnatal eye, Cloquet's canal marks the former location of
the hyaloid artery.

After regression of the hyaloid artery, the lens receives all its nourishment from the aqueous
humor. Nutrients diffuse in and waste diffuses out through a constant flow of fluid from the
anterior/posterior poles of the lens and out of the equatorial regions, a dynamic that is maintained
by the Na+/K+ ATPase pumps located in the equatorially positioned cells of the lens epithelium.[13]

Glucose is the primary energy source for the lens. As mature lens fibers do not have
mitochondria, approximately 80% of the glucose is metabolized via anaerobic respiration.[14] The
remaining fraction of glucose is shunted primarily down the pentose phosphate pathway.[14] The lack
of aerobic respiration means that the lens consumes very little oxygen as well.[14]

p Diseases

ëp Cataracts are opacities of the lens. While some are small and do not require any treatment,
others may be large enough to block light and obstruct vision. Cataracts usually develop as
the aging lens becomes more and more opaque, but cataracts can also form congenitally or
after injury to the lens. Diabetes is also a risk factor for cataract.

ëp Presbyopia is the age-related loss of accommodation, which is marked by the inability of the
eye to focus on nearby objects. The exact mechanism is still unknown, but age-related
changes in the hardness, shape, and size of the lens have all been linked to the condition.

ëp Ectopia lentis is the displacement of the lens from its normal position.

ëp Aphakia is the absence of the lens from the eye. Aphakia can be the result of surgery or
injury, or it can be congenital.

ëp Nuclear sclerosis is an age-related change in the density of the lens nucleus that occurs in all
older animals.

p Additional images

(Figure: the crystalline lens, hardened and divided.)

MICROSCOPE

A microscope (from the Greek μικρός, mikrós, "small", and σκοπεῖν, skopeîn, "to look" or "see") is an instrument used to see objects that are too small for the naked eye. The science of investigating small objects using such an instrument is called microscopy. Microscopic means invisible to the eye unless aided by a microscope. (Uses: small sample observation. Notable experiments: discovery of cells. Inventors: Hans Lippershey and Zacharias Janssen.)

p OPTICAL MICROSCOPE

The optical microscope, often referred to as the "light microscope", is a type of microscope
which uses visible light and a system of lenses to magnify images of small samples. Optical
microscopes are the oldest and simplest of the microscopes. Digital microscopes are now available
which use a CCD camera to examine a sample, and the image is shown directly on a computer screen
without the need for optics such as eye-pieces. Other microscopic methods which do not use visible
light include scanning electron microscopy and transmission electron microscopy.
There are two basic configurations of the conventional optical microscope in use, the simple
(one lens) and compound (many lenses). Digital microscopes are based on an entirely different
system of collecting the reflected light from a sample.

A simple microscope is a microscope that uses only one lens for magnification, and is the original light microscope. Van Leeuwenhoek's microscopes consisted of a small, single converging lens mounted on a brass plate, with a screw mechanism to hold the sample or specimen to be examined. Demonstrations by British microscopists have produced images from such basic instruments. Though
now considered primitive, the use of a single, convex lens for viewing is still found in simple
magnification devices, such as the magnifying glass, and the loupe. Light microscopes are able to
view specimens in color, an important advantage when compared with electron microscopes,
especially for forensic analysis, where blood traces may be important, for example.

Components

Basic optical transmission microscope elements (1990s):

1. ocular lens, or eyepiece
2. objective turret
3. objective lenses
4. coarse adjustment knob
5. fine adjustment knob
6. object holder or stage
7. mirror or light (illuminator)
8. diaphragm and condenser
All optical microscopes share the same basic components:

ëp The eyepiece - A cylinder containing two or more lenses to bring the image to focus for the
eye. The eyepiece is inserted into the top end of the body tube. Eyepieces are
interchangeable and many different eyepieces can be inserted with different degrees of
magnification. Typical magnification values for eyepieces include 5x and 10x. In some
high performance microscopes, the optical configuration of the objective lens and eyepiece
are matched to give the best possible optical performance. This occurs most commonly with
apochromatic objectives.
ëp The objective lens - a cylinder containing one or more lenses, typically made of glass, to
collect light from the sample. At the lower end of the microscope tube one or more
objective lenses are screwed into a circular nose piece which may be rotated to select the
required objective lens. Typical magnification values of objective lenses are 4x, 5x, 10x, 20x,
40x, 50x and 100x. Some high performance objective lenses may require matched eyepieces
to deliver the best optical performance.
ëp The stage - a platform below the objective which supports the specimen being viewed. In
the center of the stage is a hole through which light passes to illuminate the specimen. The
stage usually has arms to hold slides (rectangular glass plates with typical dimensions of 25 mm by 75 mm, on which the specimen is mounted).
ëp The illumination source - below the stage, light is provided and controlled in a variety of
ways. At its simplest, daylight is directed via a mirror. Most microscopes, however, have
their own controllable light source that is focused through an optical device called a
condenser, with diaphragms and filters available to manage the quality and intensity of the
light.

The whole of the optical assembly is attached to a rigid arm which in turn is attached to a robust
U shaped foot to provide the necessary rigidity. The arm is usually able to pivot on its joint with the
foot to allow the viewing angle to be adjusted. Mounted on the arm are controls for focusing,
typically a large knurled wheel to adjust coarse focus, together with a smaller knurled wheel to
control fine focus.

Modern microscopes may have many more features, including reflected light (incident)
illumination, fluorescence microscopy, phase contrast microscopy and differential interference
contrast microscopy, spectroscopy, automation, and digital imaging.

On a typical compound optical microscope, there are three objective lenses: a scanning lens (4×), a low power lens (10×) and a high power lens (ranging from 20× to 100×). Some microscopes have a
fourth objective lens, called an oil immersion lens. To use this lens, a drop of immersion oil is placed
on top of the cover slip, and the lens is very carefully lowered until the front objective element is
immersed in the oil film. Such immersion lenses are designed so that the refractive index of the oil
and of the cover slip are closely matched so that the light is transmitted from the specimen to the
outer face of the objective lens with minimal refraction. An oil immersion lens usually has a
magnification of 50 to 100×.

The actual power or magnification of an optical microscope is the product of the powers of the
ocular (eyepiece), usually about 10×, and the objective lens being used.
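As a minimal illustration of that product rule, the short Python sketch below multiplies an assumed 10x eyepiece by the objectives of an assumed turret; the values are examples, not a description of any particular instrument.

    # Total visual magnification of a compound microscope is, to a good
    # approximation, eyepiece power multiplied by objective power.
    eyepiece = 10                      # assumed 10x ocular
    objectives = [4, 10, 40, 100]      # assumed objective turret

    for objective in objectives:
        print(f"{eyepiece}x eyepiece with {objective}x objective -> {eyepiece * objective}x total")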

Compound optical microscopes can produce a magnified image of a specimen up to 1000× and,
at high magnifications, are used to study thin specimens as they have a very limited depth of field.

  

(Figure: optical path in a typical microscope)


The optical components of a modern microscope are very complex and for a microscope to
work well, the whole optical path has to be very accurately set up and controlled. Despite this, the
basic operating principles of a microscope are quite simple.

The objective lens is, at its simplest, a very high powered magnifying glass: a lens with a
very short focal length. This is brought very close to the specimen being examined so that the light
from the specimen comes to a focus about 160 mm inside the microscope tube. This creates an
enlarged image of the subject. This image is inverted and can be seen by removing the eyepiece and
placing a piece of tracing paper over the end of the tube. By carefully focusing a brightly lit
specimen, a highly enlarged image can be seen. It is this real image that is viewed by the eyepiece
lens that provides further enlargement.
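That arrangement can be checked numerically with the thin-lens equation: with a short focal length objective and the specimen placed just outside its focal point, the real image lands far down the tube and is already strongly magnified. The focal length and object distance in the Python sketch below are example values only.

    # Thin lens equation: 1/f = 1/d_o + 1/d_i  ->  d_i = 1 / (1/f - 1/d_o).
    f_obj_mm = 4.0     # assumed short objective focal length
    d_o_mm = 4.1       # specimen placed just outside the focal point (assumed)

    d_i_mm = 1.0 / (1.0 / f_obj_mm - 1.0 / d_o_mm)
    print(f"Real image forms about {d_i_mm:.0f} mm inside the tube "
          f"(lateral magnification ~ {d_i_mm / d_o_mm:.0f}x)")

With these example numbers the image falls near 164 mm, consistent with the figure of about 160 mm quoted above, and the objective alone already magnifies roughly 40x.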

In most microscopes, the eyepiece is a compound lens, with one component lens near the
front and one near the back of the eyepiece tube. This forms an air-separated couplet. In many
designs, the virtual image comes to a focus between the two lenses of the eyepiece, the first lens
bringing the real image to a focus and the second lens enabling the eye to focus on the virtual image.

In all microscopes the image is viewed with the eyes focused at infinity (mind that the
position of the eye in the above figure is determined by the eye's focus). Headaches and tired eyes
after using a microscope are usually signs that the eye is being forced to focus at a close distance
rather than at infinity.

The essential principle of the microscope is that an objective lens with very short focal
length (often a few mm) is used to form a highly magnified real image of the object. Here, the
quantity of interest is linear magnification, and this number is generally inscribed on the objective
lens casing. In practice, today, this magnification is carried out by means of two lenses: the objective
lens which creates an image at infinity, and a second weak tube lens which then forms a real image
in its focal plane.[4]
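In such an infinity-corrected arrangement the transverse magnification of the objective and tube lens together is simply the ratio of their focal lengths, M = f_tube / f_objective. A brief Python sketch, with assumed focal lengths:

    # Infinity-corrected magnification: M = f_tube / f_objective.
    # The focal lengths below are assumed for illustration only.
    f_tube_mm = 200.0                                    # assumed tube lens focal length
    objective_focal_lengths_mm = [45.0, 20.0, 9.0, 2.0]  # assumed objectives

    for f_obj in objective_focal_lengths_mm:
        print(f"f_objective = {f_obj:5.1f} mm -> magnification ~ {f_tube_mm / f_obj:.0f}x")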

  

Optical microscopy is used extensively in microelectronics, nanophysics, biotechnology, pharmaceutical research and microbiology.[5]

Optical microscopy is used for medical diagnosis, the field being termed histopathology when
dealing with tissues, or in smear tests on free cells or tissue fragments.

p STEREO MICROSCOPE

(Figure: stereo microscope optical design, showing the objective, Galilean telescopes, zoom control, internal objective, prism, relay lens, reticle, and eyepiece.)

The stereo microscope or dissecting microscope is designed differently from the diagrams above, and
serves a different purpose. It uses two separate optical paths with two objectives and two eyepieces
to provide slightly different viewing angles to the left and right eyes. In this way it produces a three-
dimensional visualization of the sample being examined.[6]

The stereo microscope is often used to study the surfaces of solid specimens or to carry out
close work such as sorting, dissection, microsurgery, watch-making, small circuit board manufacture
or inspection, and the like.

Unlike compound microscopes, illumination in a stereo microscope most often uses reflected (episcopic) illumination rather than transmitted (diascopic) illumination, that is, light
reflected from the surface of an object rather than light transmitted through an object. Use of
reflected light from the object allows examination of specimens that would be too thick or otherwise
opaque for compound microscopy. However, stereo microscopes are also capable of transmitted
light illumination as well, typically by having a bulb or mirror beneath a transparent stage
underneath the object, though unlike a compound microscope, transmitted illumination is not
focused through a condenser in most systems.[7] Stereoscopes with specially-equipped illuminators
can be used for dark field microscopy, using either reflected or transmitted light.[8]
(Figure: a scientist using a stereo microscope outfitted with a digital imaging pick-up)

Great working distance and depth of field here are important qualities for this type of
microscope. Both qualities are inversely correlated with resolution: the higher the resolution (the
shorter the distance at which two adjacent points can be distinguished as separate), the smaller the
depth of field and working distance. A stereo microscope has a useful magnification of up to about 100×. The resolution is at best on the order of that of an average 10× objective in a compound microscope, and often much lower.

There are two major types of magnification systems in stereo microscopes. One is fixed
magnification in which primary magnification is achieved by a paired set of objective lenses with a
set degree of magnification. The other is zoom or pancratic magnification, which is capable of a
continuously variable degree of magnification across a set range. Zoom systems can achieve further
magnification through the use of auxiliary objectives that increase total magnification by a set factor.
Also, total magnification in both fixed and zoom systems can be varied by changing eyepieces.

Intermediate between fixed magnification and zoom magnification systems is a system attributed to Galileo as the "Galilean optical system"; here an arrangement of fixed-focus convex lenses is used to provide a fixed magnification, but with the crucial distinction that the same optical components in the same spacing will, if physically inverted, result in a different, though still fixed, magnification. This allows one set of lenses to provide two different magnifications; two sets of lenses to provide four magnifications on one turret; three sets of lenses provide six magnifications
and will still fit into one turret. Practical experience shows that such Galilean optics systems are as
useful as a considerably more expensive zoom system, with the advantage of knowing the
magnification in use as a set value without having to read analogue scales. (In remote locations, the
robustness of the systems is also a non-trivial advantage.)

The stereo microscope should not be confused with a compound microscope equipped with
double eyepieces and a binoviewer. In such a microscope both eyes see the same image, but the
binocular eyepieces provide greater viewing comfort. However, the image in such a microscope is no
different from that obtained with a single monocular eyepiece.

  
  
 

Recently various video dual CCD camera pickups have been fitted to stereo microscopes,
allowing the images to be displayed on a high resolution LCD monitor. Software converts the two
images to an integrated anaglyph 3D image, for viewing with plastic red/cyan glasses, or to the cross
converged process for clear glasses and somewhat better color accuracy. The results are viewable by a group wearing the glasses.

p DIGITAL MICROSCOPE

(Figure: a miniature digital microscope.)

Low power microscopy is also possible with digital microscopes, with a camera attached
directly to the USB port of a computer, so that the images are shown directly on the monitor. Often
called "USB" microscopes, they offer high magnifications (up to about 00×) without the need to use
eyepieces, and at very low cost. The precise magnification is determined by the working distance
between the camera and the object, and good supports are needed to control the image. The
images can be recorded and stored in the normal way on the computer. The camera is usually fitted
with a light source, although extra sources (such as a fiber-optic light) can be used to highlight
features of interest in the object. They also offer a large depth of field, a great advantage at high
magnifications.

They are most useful when examining flat objects such as coins, printed circuit boards, or
documents such as banknotes. However, they can be used for examining any object which can be
studied in a standard stereo-microscope. Such microscopes offer the great advantage of being much
less bulky than a conventional microscope, so can be used in the field, attached to a laptop
computer. Although convenient, the magnifying abilities of these instruments are often overstated; a typical claim of 200x magnification is usually based on 5x to 30x actual optical magnification plus the expansion of the image provided by the size of the available screen, so for a genuine 200x magnification a ten-foot screen would be required.
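The arithmetic behind such claims is easy to reproduce: the on-screen magnification is roughly the true optical magnification at the sensor multiplied by the ratio of display size to sensor size. The numbers in the Python sketch below are assumptions chosen only to illustrate the calculation.

    # Apparent on-screen magnification of a "USB" microscope:
    # apparent ~ optical magnification x (display width / sensor width).
    optical_mag = 10.0          # assumed true optical magnification at the sensor
    sensor_width_mm = 5.0       # assumed sensor width
    display_width_mm = 400.0    # assumed monitor width

    apparent = optical_mag * display_width_mm / sensor_width_mm
    print(f"Apparent on-screen magnification: ~{apparent:.0f}x")   # ~800x from 10x optics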
CHAPTER IV

CONCAVE AND CONVEX MIRRORS

p CONCAVE MIRRORS

(Figure: a concave mirror diagram showing the focus, focal length, centre of curvature, principal axis, etc.)

A concave mirror, or converging mirror, has a reflecting surface that bulges inward (away
from the incident light). Concave mirrors reflect light inward to one focal point, therefore they are
used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on
the distance between the object and the mirror.

These mirrors are called "converging" because they tend to collect light that falls on them,
refocusing parallel incoming rays toward a focus. This is because the light is reflected at different
angles, since the normal to the surface differs with each spot on the mirror.

Image

Effect on image of object's position relative to mirror focal point (S = object distance, F = focal point):

- S < F (object between focal point and mirror): the image is virtual, upright, and magnified (larger).
- S = F (object at focal point): the reflected rays are parallel and do not meet, so no image is formed, or, more properly, the image is formed at infinity.
- F < S < 2F: the image is real, inverted (vertically), and magnified (larger).
- S = 2F (object at centre of curvature): the image is real, inverted (vertically), and the same size.
- S > 2F: the image is real, inverted (vertically), and diminished (smaller).

Mirror shape

Most curved mirrors have a spherical profile. These are the simplest to make, and it is the best shape
for general-purpose use. Spherical mirrors, however, suffer from spherical aberration. Parallel rays
reflected from such mirrors do not focus to a single point. For parallel rays, such as those coming
from a very distant object, a parabolic reflector can do a better job. Such a mirror can focus
incoming parallel rays to a much smaller spot than a spherical mirror can.

p Mirror equation and magnification

The Gaussian mirror equation relates the object distance (d_o) and image distance (d_i) to the focal length (f):

    1/d_o + 1/d_i = 1/f

The magnification of a mirror is defined as the height of the image divided by the height of the object:

    m = h_i/h_o = -d_i/d_o
The negative sign in this equation is used as a convention. By convention, if the magnification is
positive, the image is upright. If the magnification is negative, the image is inverted (upside down).
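A minimal numeric check of these two relations (with the focal length and object distances as arbitrary example values) reproduces the behaviour summarised in the table above: real, inverted images for objects beyond the focal point and virtual, upright, magnified images for objects inside it.

    # Solve the Gaussian mirror equation 1/d_o + 1/d_i = 1/f for the image
    # distance, then apply m = -d_i/d_o. Example values only.
    def mirror_image(d_o, f):
        d_i = 1.0 / (1.0 / f - 1.0 / d_o)    # from 1/d_o + 1/d_i = 1/f
        m = -d_i / d_o                       # magnification
        return d_i, m

    f = 10.0                                 # assumed focal length (cm) of a concave mirror
    for d_o in (30.0, 20.0, 15.0, 5.0):      # object distances straddling F and 2F
        d_i, m = mirror_image(d_o, f)
        kind = "real, inverted" if d_i > 0 else "virtual, upright"
        print(f"d_o = {d_o:4.0f} cm -> d_i = {d_i:6.1f} cm, m = {m:+.2f} ({kind})")

With f = 10 cm, the object at 20 cm reproduces the "same size" case, and the object at 5 cm gives the virtual, upright, magnified case.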

p Ray tracing

The image location and size can also be found by graphical ray tracing, as illustrated in the
figures above. A ray drawn from the top of the object to the surface vertex (where the optical axis
meets the mirror) will form an angle with that axis. The reflected ray has the same angle to the axis,
but is below it (See Specular reflection).

A second ray can be drawn from the top of the object passing through the focal point and
reflecting off the mirror at a point somewhere below the optical axis. Such a ray will be reflected
from the mirror as a ray parallel to the optical axis. The point at which the two rays described above
meet is the image point corresponding to the top of the object. Its distance from the axis defines the
height of the image, and its location along the axis is the image location. The mirror equation and
magnification equation can be derived geometrically by considering these two rays.

p Ray transfer matrix of spherical mirrors

The mathematical treatment is done under the paraxial approximation, meaning that under the
first approximation a spherical mirror is a parabolic reflector. The ray matrix of a spherical mirror is
shown here for the concave reflecting surface of a spherical mirror:

    [  1     0 ]
    [ -1/f   1 ]

The C element of the matrix is -1/f, where f is the focal length of the mirror.


Boxes 1 and 3 feature summing the angles of a triangle and comparing to π radians (or 180°). Box 2 shows the Maclaurin series expansion up to order 1. The derivations of the ray matrices
of a convex spherical mirror and a thin lens are very similar.
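Under the same paraxial convention, a ray described by its height and angle can be carried through the mirror by multiplying by this matrix. The short Python sketch below assumes f = R/2 for a concave mirror and traces two example rays; the radius of curvature is an arbitrary example value.

    import numpy as np

    # Paraxial ray transfer matrix of a concave spherical mirror: [[1, 0], [-1/f, 1]],
    # with f = R/2. A ray is the column vector (height y, angle theta).
    R = 0.20                       # assumed radius of curvature (metres)
    f = R / 2.0
    mirror = np.array([[1.0, 0.0],
                       [-1.0 / f, 1.0]])

    rays = np.array([[0.01, 0.00],     # ray parallel to the axis at height 1 cm
                     [0.00, 0.05]]).T  # ray through the vertex at 50 mrad

    reflected = mirror @ rays
    print(reflected)   # the parallel ray leaves with angle -y/f, heading toward the focus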

p CONVEX MIRRORS

(Figure: a convex mirror diagram showing the focus, focal length, centre of curvature, principal axis, etc.)

A convex mirror, fish eye mirror or diverging mirror, is a curved mirror in which the reflective
surface bulges toward the light source. Convex mirrors reflect light outwards, therefore they are not
used to focus light. Such mirrors always form a virtual image, since the focus F and the centre of curvature 2F are both imaginary points "inside" the mirror, which cannot be reached. Therefore, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror.

A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex
mirror, since the normal to the surface differs with each spot on the mirror.

Image

(Figure: convex mirror image formation)

The image is always virtual (rays have not actually passed through the image), diminished (smaller), and upright. These features make convex mirrors very useful: everything appears smaller
in the mirror, so they cover a wider field of view than a normal plane mirror does as the image is
"compressed".

Uses
(Figure: a convex mirror lets motorists see around a corner.)

The passenger-side mirror on a car is typically a convex mirror. In some countries, these are
labelled with the safety warning "Objects in mirror are closer than they appear", to warn the driver
of the convex mirror's distorting effects on distance perception.

Convex mirrors are used in some automated teller machines as a simple and handy security
feature, allowing the users to see what is happening behind them. Similar devices are sold to be
attached to ordinary computer monitors.

Some camera phones use convex mirrors to allow the user to correctly aim the camera while
taking a self-portrait.
CHAPTER V

RELATIVITY


The Theory of Relativity, proposed by the Jewish physicist Albert Einstein (1879-1955) in the early part of the 20th century, is one of the most significant scientific advances of our time. Although
the concept of relativity was not introduced by Einstein, his major contribution was the recognition
that the speed of light in a vacuum is constant and an absolute physical boundary for motion. This
does not have a major impact on a person's day-to-day life since we travel at speeds much slower
than light speed. For objects travelling near light speed, however, the theory of relativity states that
objects will move slower and shorten in length from the point of view of an observer on Earth.
Einstein also derived the famous equation E = mc^2, which reveals the equivalence of mass and energy.
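Both statements can be put in numbers. The Python sketch below evaluates the Lorentz factor, which governs how much moving clocks slow and lengths contract as seen by an Earth observer, and the rest-mass energy E = mc^2 of one kilogram; only the standard value of c is used, and the speeds are example values.

    import math

    c = 299_792_458.0   # speed of light in vacuum, m/s

    def lorentz_factor(v):
        """gamma = 1 / sqrt(1 - v^2/c^2); clocks slow and lengths contract by this factor."""
        return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

    for fraction in (0.1, 0.5, 0.9, 0.99):
        print(f"v = {fraction:4.2f} c -> gamma = {lorentz_factor(fraction * c):6.3f}")

    mass_kg = 1.0
    print(f"E = mc^2 for 1 kg: {mass_kg * c**2:.3e} J")   # roughly 9.0e16 joules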

When Einstein applied his theory to gravitational fields, he derived the "curved space-time
continuum" which depicts the dimensions of space and time as a two-dimensional surface where
massive objects create valleys and dips in the surface. This aspect of relativity explained the
phenomena of light bending around the sun, predicted black holes as well as the Cosmic Microwave
Background Radiation (CMB) -- a discovery rendering fundamental anomalies in the classic Steady-
State hypothesis. For his work on relativity, the photoelectric effect and blackbody radiation,
Einstein received the Nobel Prize in 1921.

p SPECIAL AND GENERAL RELATIVITY
Physicists usually dichotomize the Theory of Relativity into two parts.

The first is the Special Theory of Relativity, which essentially deals with the question of whether
rest and motion are relative or absolute, and with the consequences of Einstein's conjecture that
they are relative.

The second is the General Theory of Relativity, which primarily applies to particles as they
accelerate, particularly due to gravitation, and acts as a radical revision of Newton's theory,
predicting important new results for fast-moving and/or very massive bodies. The General Theory of
Relativity correctly reproduces all validated predictions of Newton's theory, but expands on our
understanding of some of the key principles. Newtonian physics had previously hypothesised that
gravity operated through empty space, but the theory lacked explanatory power as to how information about the distance and mass of a given object could be transmitted through space. General relativity irons out
this paradox, for it shows that objects continue to move in a straight line in space-time, but we
observe the motion as acceleration because of the curved nature of space-time.

Einstein's theories of both special and general relativity have been confirmed to be accurate to a very high degree over recent years, and the data has been shown to corroborate many key predictions; the most famous being the solar eclipse of 1919 bearing testimony that the light of stars
is indeed deflected by the sun as the light passes near the sun on its way to earth. The total solar
eclipse allowed astronomers to -- for the first time -- analyse starlight near the edge of the sun,
which had been previously inaccessible to observers due to the intense brightness of the sun. It also
predicted the rate at which two neutron stars orbiting one another will move toward each other.
When this phenomenon was first documented, general relativity proved itself accurate to better
than a trillionth of a percent precision, thus making it one of the best confirmed principles in all of
physics.
Applying the principle of general relativity to our cosmos reveals that it is not static. Edwin
Hubble (1889-1953) demonstrated in 1929 that the Universe is expanding, showing beyond
reasonable doubt that the Universe sprang into being a finite time ago. The most common
contemporary interpretation of this expansion is that this began to exist from the moment of the Big
Bang some 13.7 billion years ago. However this is not the only plausible cosmological model which
exists in academia, and many creation physicists such as Russell Humphreys and John Hartnett have
devised models operating with a biblical framework, which -- to date -- have withstood the test of
criticism from the most vehement of opponents.

p COSMOLOGICAL IMPLICATIONS

Using the observed cosmic expansion conjunctively with the general theory of relativity, we
can infer from the data that the further back into time one looks, the universe ought to diminish in
size accordingly. However, this cannot be extrapolated indefinitely. The universe͛s expansion helps
us to appreciate the direction in which time flows. This is referred to as the Cosmological arrow of
time, and implies that the future is -- by definition -- the direction towards which the universe
increases in size. The expansion of the universe also gives rise to the second law of thermodynamics,
which states that the overall entropy (or disorder) in the Universe can only increase with time
because the amount of energy available for work deteriorates with time. If the universe were eternal,
therefore, the amount of usable energy available for work would have already been exhausted.
Hence it follows that at one point the entropy value was at absolute 0 (most ordered state at the
moment of creation) and the entropy has been increasing ever since -- that is, the universe at one
point was fully "wound up" and has been winding down ever since. This has profound theological
implications, for it shows that time itself is necessarily finite. If the universe were eternal, the
thermal energy in the universe would have been evenly distributed throughout the cosmos, leaving
each region of the cosmos at uniform temperature (at very close to absolute 0), rendering no further
work possible.

The General Theory of Relativity demonstrates that time is linked, or related, to matter and
space, and thus the dimensions of time, space, and matter constitute what we would call a
continuum. They must come into being at precisely the same instant. Time itself cannot exist in the
absence of matter and space. From this, we can infer that the uncaused first cause must exist
outside of the four dimensions of space and time, and possess eternal, personal, and intelligent qualities in order to possess the capability of intentionally bringing space, matter -- and indeed even time itself -- into being.

Moreover, the very physical nature of time and space also suggest a Creator, for infinity and
eternity must necessarily exist from a logical perspective. The existence of time implies eternity (as
time has a beginning and an end), and the existence of space implies infinity. The very concepts of
infinity and eternity imply a Creator because they find their very state of being in God, who transcends both.
CHAPTER VI

QUANTUM MECHANICS

p QUANTUM MECHANICS AND CLASSICAL PHYSICS

Predictions of quantum mechanics have been verified experimentally to a very high degree
of accuracy. Thus, the current logic of correspondence principle between classical and quantum
mechanics is that all objects obey laws of quantum mechanics, and classical mechanics is just a
quantum mechanics of large systems (or a statistical quantum mechanics of a large collection of
particles). Laws of classical mechanics thus follow from laws of quantum mechanics at the limit of
large systems or large quantum numbers.[10] However, chaotic systems do not have good quantum
numbers, and quantum chaos studies the relationship between classical and quantum descriptions
in these systems.

The main differences between classical and quantum theories have already been mentioned
above in the remarks on the Einstein-Podolsky-Rosen paradox. Essentially the difference boils down
to the statement that quantum mechanics is coherent (addition of amplitudes), whereas classical
theories are incoherent (addition of intensities). Thus, such quantities as coherence lengths and
coherence times come into play. For microscopic bodies the extension of the system is certainly
much smaller than the coherence length; for macroscopic bodies one expects that it should be the
other way round.[11] An exception to this rule can occur at extremely low temperatures, when
quantum behavior can manifest itself on more macroscopic scales (see Bose-Einstein condensate).

This is in accordance with the following observations:

Many macroscopic properties of classical systems are direct consequences of the quantum behavior of their parts. For example, the stability of bulk matter (which consists of atoms and
molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the
mechanical, thermal, chemical, optical and magnetic properties of matter are all results of
interaction of electric charges under the rules of quantum mechanics.[12]

While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory becomes more apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical Newtonian physics remain accurate in predicting the behavior of large objects (of the order of the size of large molecules and bigger) at velocities much smaller than the velocity of light.[13]

Theory

There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by
Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest
formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg)[14][15] and
wave mechanics (invented by Erwin Schrödinger).[16]

In this formulation, the instantaneous state of a quantum system encodes the probabilities of its
measurable properties, or "observables". Examples of observables include energy, position,
momentum, and angular momentum. Observables can be either continuous (e.g., the position of a
particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).[17] Generally,
quantum mechanics does not assign definite values to observables. Instead, it makes predictions
using probability distributions; that is, the probability of obtaining possible outcomes from measuring an observable. Oftentimes these results are skewed by many causes, such as dense probability clouds[18] or quantum state nuclear attraction.[19][20] Naturally, these probabilities will
depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in
the value. There are, however, certain states that are associated with a definite value of a particular
observable. These are known as eigenstates of the observable ("eigen" can be translated from
German as "inherent" or "characteristic").[21] In the everyday world, it is natural and intuitive to
think of everything (every observable) as being in an eigenstate. Everything appears to have a
definite position, a definite momentum, a definite energy, and a definite time of occurrence.
However, quantum mechanics does not pinpoint the exact values of a particle for its position and
momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate
pairs); rather, it only provides a range of probabilities of where that particle might be given its
momentum and momentum probability. Therefore, it is helpful to use different words to describe
states having uncertain values and states having definite values (eigenstate).

(Figure: 3D confined electron wave functions for each eigenstate in a quantum dot, for rectangular and triangular-shaped dots. Energy states in rectangular dots are more 's-type' and 'p-type'; in a triangular dot the wave functions are mixed due to confinement symmetry.)

For example, consider a free particle. In quantum mechanics, there is wave-particle duality so the
properties of the particle can be described as the properties of a wave. Therefore, its quantum state
can be represented as a wave of arbitrary shape and extending over space as a wave function. The
position and momentum of the particle are observables. The Uncertainty Principle states that both
the position and the momentum cannot simultaneously be measured with full precision at the same
time. However, one can measure the position alone of a moving free particle creating an eigenstate
of position with a wavefunction that is very large (a Dirac delta) at a particular position x and zero
everywhere else. If one performs a position measurement on such a wavefunction, the result x will
be obtained with 100% probability (full certainty). This is called an eigenstate of position
(mathematically more precise: a generalized position eigenstate (eigendistribution)). If the particle is
in an eigenstate of position then its momentum is completely unknown. On the other hand, if the
particle is in an eigenstate of momentum then its position is completely unknown.[22] In an
eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to
h/p, where h is Planck's constant and p is the momentum of the eigenstate.[23]
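That relation is straightforward to evaluate. The Python sketch below computes the de Broglie wavelength h/p for an electron at a few non-relativistic speeds; the speeds are example values.

    # de Broglie wavelength: lambda = h / p, with p = m * v for a slow electron.
    h = 6.626e-34        # Planck's constant, J*s
    m_e = 9.109e-31      # electron mass, kg

    for v in (1.0e5, 1.0e6, 1.0e7):          # example speeds, m/s
        wavelength = h / (m_e * v)
        print(f"v = {v:.0e} m/s -> lambda = {wavelength:.3e} m")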

Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one
measures the observable, the wavefunction will instantaneously be an eigenstate (or generalized
eigenstate) of that observable. This process is known as wavefunction collapse, a debatable
process.[4] It involves expanding the system under study to include the measurement device. If one
knows the corresponding wave function at the instant before the measurement, one will be able to
compute the probability of collapsing into each of the possible eigenstates. For example, the free
particle in the previous example will usually have a wavefunction that is a wave packet centered
around some mean position x0, neither an eigenstate of position nor of momentum. When one
measures the position of the particle, it is impossible to predict with certainty the result.[25] It is
probable, but not certain, that it will be near x0, where the amplitude of the wave function is large.
After the measurement is performed, having obtained some result x, the wave function collapses
into a position eigenstate centered at x.[26]

Wave functions can change as time progresses. An equation known as the Schrödinger equation
describes how wave functions change in time, a role similar to Newton's second law in classical
mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle,
predicts that the center of a wave packet will move through space at a constant velocity, like a
classical particle with no forces acting on it. However, the wave packet will also spread out as time
progresses, which means that the position becomes more uncertain. This also has the effect of
turning position eigenstates (which can be thought of as infinitely sharp wave packets) into
broadened wave packets that are no longer position eigenstates. Some wave functions produce
probability distributions that are constant or independent of time, such as when in a stationary state
of constant energy, time drops out of the absolute square of the wave function. Many systems that
are treated dynamically in classical mechanics are described by such "static" wave functions. For
example, a single electron in an unexcited atom is pictured classically as a particle moving in a
circular trajectory around the atomic nucleus, whereas in quantum mechanics it is described by a
static, spherically symmetric wavefunction surrounding the nucleus (Fig. 1). (Note that only the
lowest angular momentum states, labeled s, are spherically symmetric).

The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an
initial time, it makes a definite prediction of what the wavefunction will be at any later time. During
a measurement, the change of the wavefunction into another one is not deterministic, but rather
unpredictable, i.e., random.

The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one
of the most difficult aspects of quantum systems to understand. It was the central topic in the
famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental
principles by way of thought experiments. In the decades after the formulation of quantum
mechanics, the question of what constitutes a "measurement" has been extensively studied.
Interpretations of quantum mechanics have been formulated to do away with the concept of
"wavefunction collapse"; see, for example, the relative state interpretation. The basic idea is that
when a quantum system interacts with a measuring apparatus, their respective wavefunctions
become entangled, so that the original quantum system ceases to exist as an independent entity.
For details, see the article on measurement in quantum mechanics.

p MATHEMATICAL FORMULATIONS

In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac[31] and John von Neumann[32], the possible states of a quantum mechanical system are represented by
unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called
the "state space" or the "associated Hilbert space" of the system) well defined up to a complex
number of norm 1 (the phase factor). In other words, the possible states are points in the
projectivization of a Hilbert space, usually called the complex projective space. The exact nature of
this Hilbert space is dependent on the system; for example, the state space for position and
momentum states is the space of square-integrable functions, while the state space for the spin of a
single proton is just the product of two complex planes. Each observable is represented by a
maximally-Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each
eigenstate of an observable corresponds to an eigenvector of the operator, and the associated
eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum
is discrete, the observable can only attain those discrete eigenvalues.

The time evolution of a quantum state is described by the Schrödinger equation, in which the
Hamiltonian, the operator corresponding to the total energy of the system, generates time
evolution.

The inner product between two state vectors is a complex number known as a probability
amplitude. During a measurement, the probability that a system collapses from a given initial state
to a particular eigenstate is given by the square of the absolute value of the probability amplitudes
between the initial and final states. The possible results of a measurement are the eigenvalues of the
operator - which explains the choice of Hermitian operators, for which all the eigenvalues are real.
We can find the probability distribution of an observable in a given state by computing the spectral
decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by
the statement that the operators corresponding to certain observables do not commute.
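These statements can be made concrete with the smallest possible example, a single spin-1/2 degree of freedom: a Hermitian 2x2 observable, its eigenstates, and the probability |<eigenstate|psi>|^2 of collapsing into each of them. The state vector in the Python sketch below is an arbitrary example.

    import numpy as np

    # Observable: the Pauli-z operator, a Hermitian matrix with eigenvalues +1 and -1.
    sigma_z = np.array([[1.0, 0.0],
                        [0.0, -1.0]])
    eigenvalues, eigenvectors = np.linalg.eigh(sigma_z)

    # An arbitrary normalized state vector in the two-dimensional Hilbert space.
    psi = np.array([1.0, 2.0], dtype=complex)
    psi /= np.linalg.norm(psi)

    # Born rule: the probability of each outcome is the squared magnitude of the
    # probability amplitude <eigenvector|psi>.
    for value, vector in zip(eigenvalues, eigenvectors.T):
        amplitude = np.vdot(vector, psi)
        print(f"outcome {value:+.0f}: probability {abs(amplitude) ** 2:.2f}")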

The Schrödinger equation acts on the entire probability amplitude, not merely its absolute value.
Whereas the absolute value of the probability amplitude encodes information about probabilities, its
phase encodes information about the interference between quantum states. This gives rise to the
wave-like behavior of quantum states.

It turns out that analytic solutions of Schrödinger's equation are only available for a small
number of model Hamiltonians, of which the quantum harmonic oscillator, the particle in a box, the
hydrogen molecular ion and the hydrogen atom are the most important representatives. Even the
helium atom, which contains just one more electron than hydrogen, defies all attempts at a fully
analytic treatment. There exist several techniques for generating approximate solutions. For
instance, in the method known as perturbation theory one uses the analytic results for a simple
quantum mechanical model to generate results for a more complicated model related to the simple
model by, for example, the addition of a weak potential energy. Another method is the "semi-
classical equation of motion" approach, which applies to systems for which quantum mechanics
produces weak deviations from classical behavior. The deviations can be calculated based on the
classical motion. This approach is important for the field of quantum chaos.

An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over histories between initial and
final states; this is the quantum-mechanical counterpart of action principles in classical mechanics.

p INTERACTIONS WITH OTHER SCIENTIFIC THEORIES

The fundamental rules of quantum mechanics are very deep. They assert that the state space of
a system is a Hilbert space and the observables are Hermitian operators acting on that space, but do
not tell us which Hilbert space or which operators, or if it even exists. These must be chosen
appropriately in order to obtain a quantitative description of a quantum system. An important guide
for making these choices is the correspondence principle, which states that the predictions of
quantum mechanics reduce to those of classical physics when a system moves to higher energies or
equivalently, larger quantum numbers. In other words, classical mechanics is simply a quantum
mechanics of large systems. This "high energy" limit is known as the classical or correspondence
limit. One can therefore start from an established classical model of a particular system, and attempt
to guess the underlying quantum model that gives rise to the classical model in the correspondence
limit.

Unsolved problems in physics: In the correspondence limit of quantum mechanics, is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wavefunction collapse", give rise to the reality we perceive?

When quantum mechanics was originally formulated, it was applied to models whose
correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of
the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy
of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Early attempts to merge quantum mechanics with special relativity involved the
replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon
equation or the Dirac equation. While these theories were successful in explaining many
experimental results, they had certain unsatisfactory qualities stemming from their neglect of the
relativistic creation and annihilation of particles. A fully relativistic quantum theory required the
development of quantum field theory, which applies quantization to a field rather than a fixed set of
particles. The first complete quantum field theory, quantum electrodynamics, provides a fully
quantum description of the electromagnetic interaction.

The full apparatus of quantum field theory is often unnecessary for describing
electrodynamic systems. A simpler approach, one employed since the inception of quantum
mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical
electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes
the electric field of the hydrogen atom using a classical Coulomb potential. This "semi-
classical" approach fails if quantum fluctuations in the electromagnetic field play an important role,
such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have been
developed. The quantum field theory of the strong nuclear force is called quantum
chromodynamics, and describes the interactions of the subnuclear particles: quarks and gluons. The
weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a
single quantum field theory known as electroweak theory, by the physicists Abdus Salam, Sheldon
Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this
work.[33]

It has proven difficult to construct quantum models of gravity, the remaining fundamental
force. Semi-classical approximations are workable, and have led to predictions such as Hawking
radiation. However, the formulation of a complete theory of quantum gravity is hindered by
apparent incompatibilities between general relativity, the most accurate theory of gravity currently
known, and some of the fundamental assumptions of quantum theory. The resolution of these
incompatibilities is an area of active research, and theories such as string theory are among the
possible candidates for a future theory of quantum gravity.

In the 21st century, classical mechanics has been extended into the complex domain, and complex classical mechanics exhibits behaviours very similar to quantum mechanics.[34]

Example

The particle in a 1-dimensional potential energy box is the simplest example where restraints lead to the quantization of energy levels. The box is defined as zero potential energy inside a certain interval and infinite everywhere outside that interval. For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[35]

    -(ħ^2 / 2m) (d^2ψ/dx^2) = Eψ

The general solutions are:

    ψ(x) = A e^(ikx) + B e^(-ikx),  with  E = ħ^2 k^2 / 2m

or, from Euler's formula,

    ψ(x) = C sin(kx) + D cos(kx)

The presence of the walls of the box determines the values of C, D, and k. At each wall (x = 0 and x = L), ψ = 0. Thus when x = 0,

    ψ(0) = C sin(0) + D cos(0) = D

and so D = 0. When x = L,

    ψ(L) = C sin(kL) = 0

C cannot be zero, since this would conflict with the Born interpretation. Therefore sin(kL) = 0, and so it must be that kL is an integer multiple of π. Therefore,

    k = nπ/L,  n = 1, 2, 3, ...

The quantization of energy levels follows from this constraint on k, since

    E_n = ħ^2 k^2 / (2m) = n^2 π^2 ħ^2 / (2m L^2) = n^2 h^2 / (8m L^2)
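A short numeric Python sketch of this result, evaluating E_n = n^2 h^2 / (8 m L^2) for an electron confined to an assumed 1 nm box (the box width is an example value):

    # Energy levels of a particle in a 1-D infinite box: E_n = n^2 h^2 / (8 m L^2).
    h = 6.626e-34        # Planck's constant, J*s
    m_e = 9.109e-31      # electron mass, kg
    L = 1.0e-9           # assumed box width: 1 nanometre
    eV = 1.602e-19       # joules per electronvolt

    for n in range(1, 5):
        E_n = n**2 * h**2 / (8.0 * m_e * L**2)
        print(f"n = {n}: E = {E_n / eV:.3f} eV")

With these values the ground state comes out near 0.38 eV, and the levels grow as n^2, which is the quantization imposed by the constraint on k.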

p ATTEMPTS AT A UNIFIED FIELD THEORY

As of 2010, the quest for unifying the fundamental forces through quantum mechanics is still
ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently the most
accurately tested physical theory,[36] has been successfully merged with the weak nuclear force into
the electroweak force and work is currently being done to merge the electroweak and strong force
into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field. Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.

p RELATIVITY AND QUANTUM MECHANICS

Even with the defining postulates of both Einstein's theory of general relativity and quantum
theory being indisputably supported by rigorous and repeated empirical evidence and while they do
not directly contradict each other theoretically (at least with regard to primary claims), they are
resistant to being incorporated within one cohesive model.[38]

Einstein himself is well known for rejecting some of the claims of quantum mechanics. While
clearly contributing to the field, he did not accept the more philosophical consequences and
interpretations of quantum mechanics, such as the lack of deterministic causality and the assertion
that a single subatomic particle can occupy numerous areas of space at one time. He also was the
first to notice some of the apparently exotic consequences of entanglement and used them to
formulate the Einstein-Podolsky-Rosen paradox, in the hope of showing that quantum mechanics
had unacceptable implications. This was in 1935, but in 1964 it was shown by John Bell (see Bell
inequality) that Einstein's assumption was correct, but had to be completed by hidden variables and
thus based on wrong philosophical assumptions. According to the paper of J. Bell and the
Copenhagen interpretation (the common interpretation of quantum mechanics by physicists since
1927), and contrary to Einstein's ideas, quantum mechanics was

ëp neither a "realistic" theory (since quantum measurements do not state pre-existing properties, but rather they prepare properties)
ëp nor a local theory (essentially not, because the state vector determines simultaneously the probability amplitudes at all sites).

The Einstein-Podolsky-Rosen paradox shows in any case that there exist experiments by which
one can measure the state of one particle and instantaneously change the state of its entangled
partner, although the two particles can be an arbitrary distance apart; however, this effect does not
violate causality, since no transfer of information happens. These experiments are the basis of some
of the most topical applications of the theory, quantum cryptography, which has been on the market since 2004 and works well, although only at small distances of typically up to 1000 km.

Gravity is negligible in many areas of particle physics, so that unification between general
relativity and quantum mechanics is not an urgent issue in those applications. However, the lack of a
correct theory of quantum gravity is an important issue in cosmology and physicists' search for an
elegant "theory of everything". Thus, resolving the inconsistencies between both theories has been a
major goal of twentieth- and twenty-first-century physics. Many prominent physicists, including
Stephen Hawking, have labored in the attempt to discover a theory underlying everything,
combining not only different models of subatomic physics, but also deriving the universe's four
forces (the strong force, electromagnetism, the weak force, and gravity) from a single force or
phenomenon. One of the leaders in this field is Edward Witten, a theoretical physicist who
formulated the groundbreaking M-theory, which is an attempt at describing supersymmetric string theory.
