
PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.

PDF generated at: Wed, 21 Aug 2013 16:46:53 UTC


Photography Techniques
Advanced Skills
Contents

Articles
  Zone System
  High-dynamic-range imaging
  Contre-jour
  Night photography
  Multiple exposure
  Camera obscura
  Pinhole camera
  Stereoscopy

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License
Zone System
The Zone System is a photographic technique for determining optimal film exposure and development, formulated by Ansel Adams and Fred Archer.[1] Adams described the Zone System as "[...] not an invention of mine; it is a codification of the principles of sensitometry, worked out by Fred Archer and myself at the Art Center School in Los Angeles, around 1939-40."[2]
The technique is based on the late 19th century sensitometry studies of Hurter and Driffield. The Zone System
provides photographers with a systematic method of precisely defining the relationship between the way they
visualize the photographic subject and the final results. Although it originated with black-and-white sheet film, the
Zone System is also applicable to roll film, both black-and-white and color, negative and reversal, and to digital
photography.
Principles
Visualization
An expressive image involves the arrangement and rendering of various scene elements according to the photographer's desire. Achieving the desired image involves image management (placement of the camera, choice of lens, and
possibly the use of camera movements) and control of image values. The Zone System is concerned with control of
image values, ensuring that light and dark values are rendered as desired. Anticipation of the final result before
making the exposure is known as visualization.
Exposure metering
Any scene of photographic interest contains elements of different luminance; consequently, the exposure actually
is many different exposures. The exposure time is the same for all elements, but the image illuminance varies with
the luminance of each subject element.
Exposure is often determined using a reflected-light exposure meter.[3] The earliest meters measured overall average luminance; meter calibration was established to give satisfactory exposures for typical outdoor scenes. However, if the part of a scene that is metered includes large areas of unusually high or low reflectance, or unusually large areas of highlight or shadow, the effective average reflectance[4] may differ substantially from that of a typical scene, and the rendering may not be as desired.
An averaging meter cannot distinguish between a subject of uniform luminance and one that consists of light and
dark elements. When exposure is determined from average luminance measurements, the exposure of any given
scene element depends on the relationship of its reflectance to the effective average reflectance. For example, a dark
object of 4% reflectance would be given a different exposure in a scene of 20% effective average reflectance than it
would be given in a scene of 12% reflectance. In a sunlit outdoor scene, the exposure for the dark object would also
depend on whether the object was in sunlight or shade. Depending on the scene and the photographer's objective,
any of the previous exposures might be acceptable. However, in some situations, the photographer might wish to
specifically control the rendering of the dark object; with overall average metering, this is difficult if not impossible.
When it is important to control the rendering of specific scene elements, alternative metering techniques may be
required.
It is possible to make a meter reading of an individual scene element, but the exposure indicated by the meter will
render that element as a medium gray; in the case of a dark object, that result is usually not what is desired. Even
when metering individual scene elements, some adjustment of the indicated exposure is often needed if the metered
scene element is to be rendered as visualized.
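The arithmetic behind the averaging-meter example above can be sketched in Python. This is a simplified illustrative model, not part of the Zone System literature: it assumes the meter renders the scene's effective average reflectance as a conventional 18% middle gray, and that rendered values scale linearly with reflectance.

```python
MIDDLE_GRAY = 0.18  # conventional mid-tone reflectance (assumed)

def rendered_value(object_reflectance, effective_average):
    """Relative tone an averaging meter gives an object (simplified model)."""
    return MIDDLE_GRAY * object_reflectance / effective_average

# The 4% dark object from the text, metered in two different scenes:
in_20pct_scene = rendered_value(0.04, 0.20)  # 0.036: rendered darker
in_12pct_scene = rendered_value(0.04, 0.12)  # 0.060: rendered lighter
```

The same 4% object lands on a different tone depending on the surrounding scene, which is exactly why the Zone System meters the element itself instead.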
Exposure zones
In the Zone System, measurements are made of individual scene elements, and exposure is adjusted based on the photographer's knowledge of what is being metered: a photographer knows the difference between freshly fallen snow and a black horse, while a meter does not. Much has been written on the Zone System, but the concept is very simple: render light subjects as light, and dark subjects as dark, according to the photographer's visualization. The Zone System assigns numbers from 0 through 10[5] to different brightness values, with 0 representing black, 5 middle gray, and 10 pure white; these values are known as zones. To make zones easily distinguishable from other quantities, Adams and Archer used Roman rather than Arabic numerals. Strictly speaking, zones refer to exposure,[6] with a Zone V exposure (the meter indication) resulting in a mid-tone rendering in the final image. Each zone differs from the preceding or following zone by a factor of two, so that a Zone I exposure is twice that of Zone 0, and so forth. A one-zone change is equal to one stop,[7] corresponding to standard aperture and shutter controls on a camera. Evaluating a scene is particularly easy with a meter that indicates in exposure value (EV), because a change of one EV is equal to a change of one zone.
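The zone arithmetic described above is simple enough to sketch in code; `relative_exposure` is a hypothetical helper for illustration, not standard photographic software.

```python
def relative_exposure(zone):
    """Exposure of a zone relative to Zone V, the meter's indication."""
    return 2.0 ** (zone - 5)

# Each zone is a factor of two from its neighbors (one stop / one EV):
assert relative_exposure(1) == 2 * relative_exposure(0)
assert relative_exposure(6) == 2.0  # one stop more than the meter reading
```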
Many small- and medium-format cameras include provision for exposure compensation; this feature works well with
the Zone System, especially if the camera includes spot metering, but obtaining proper results requires careful
metering of individual scene elements and making appropriate adjustments.
Zones, the physical world and the print
The relationship between the physical scene and the print is established by characteristics of the negative and the
print. Exposure and development of the negative are usually determined so that a properly exposed negative will
yield an acceptable print on a specific photographic paper.
Although zones directly relate to exposure, visualization relates to the final result. A black-and-white photographic
print represents the visual world as a series of tones ranging from black to white. Imagine all of the tonal values that
can appear in a print, represented as a continuous gradation from black to white:
Full Tonal Gradation
From this starting point, zones are formed by:
- Dividing the tonal gradation into eleven equal sections.
- Blending each section into one tone that represents all the tonal values in that section.
- Numbering each section with Roman numerals, from 0 for the black section to X for the white one.

Eleven-Step Gradation
Note: You may need to adjust the brightness and contrast of your monitor to see the gradations at the dark and light end of the scales.
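The steps above can be sketched as a short program. Representing each section by its midpoint tone is an assumption made for illustration; any representative tone per section would serve.

```python
ROMAN = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def eleven_step_scale():
    """Eleven (label, tone) pairs covering the 0.0 (black) to 1.0 (white) range."""
    steps = []
    for i in range(11):
        midpoint = (i + 0.5) / 11  # one blended tone per section (assumed midpoint)
        steps.append((ROMAN[i], midpoint))
    return steps

scale = eleven_step_scale()  # [("0", 0.045...), ..., ("V", 0.5), ..., ("X", 0.954...)]
```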
Exposure
A dark surface under a bright light can reflect the same amount of light as a light surface under dim light. The human
eye would perceive the two as being very different but a light meter would measure only the amount of light
reflected, and its recommended exposure would render either as Zone V. The Zone System provides a straightforward method for rendering these objects as the photographer desires. The key element in the scene is identified, and that element is placed on the desired zone; the other elements in the scene then fall where they may. With negative film, exposure often favors shadow detail; the procedure then is to
1. Visualize the darkest area of the subject in which detail is required, and place it on Zone III. The exposure for Zone III is important, because if the exposure is insufficient, the image may not have satisfactory shadow detail. If the shadow detail is not recorded at the time of exposure, nothing can be done to add it later.
2. Carefully meter the area visualized as Zone III and note the meter's recommended exposure (the meter gives a Zone V exposure).
3. Adjust the recommended exposure so that the area is placed on Zone III rather than Zone V. To do this, use an exposure two stops less than the meter's recommendation.
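The placement procedure can be sketched numerically. The function names are hypothetical, and the example assumes exposure is adjusted purely via shutter time.

```python
def placement_adjustment(target_zone):
    """Stops to add to the meter's recommendation (negative = less exposure)."""
    return target_zone - 5  # the meter's reading corresponds to Zone V

def adjusted_shutter_time(metered_time, target_zone):
    """Shutter time after placing the metered area on target_zone."""
    return metered_time * 2.0 ** placement_adjustment(target_zone)

# Placing a shadow area on Zone III: two stops less than the meter suggests.
time_for_zone_iii = adjusted_shutter_time(1 / 60, 3)  # 1/240 s
```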
Development
For every combination of film, developer, and paper there is a normal development time that will allow a properly exposed negative to give a reasonable print. In many cases, this means that values in the print will display as recorded (e.g., Zone V as Zone V, Zone VI as Zone VI, and so on). In general, optimal negative development will be different for every type and grade of paper.
It is often desirable for a print to exhibit a full range of tonal values; this may not be possible for a low-contrast scene if the negative is given normal development. However, the development can be increased to increase the negative contrast so that the full range of tones is available. This technique is known as expansion, and the development usually referred to as "plus" or "N+". Criteria for plus development vary among different photographers; Adams used it to raise a Zone VII placement to Zone VIII in the print, and referred to it as N+1 development.
Conversely, if the negative for a high-contrast scene is given normal development, desired detail may be lost in either shadow or highlight areas, and the result may appear harsh. However, development can be reduced so that a scene element placed on Zone IX is rendered as Zone VIII in the print; this technique is known as contraction, and the development usually referred to as "minus" or "N−". When the resulting change is one zone, it is usually called N−1 development.
It sometimes is possible to make greater adjustments, using N+2 or N−2 development, and occasionally even beyond.
Development has the greatest effect on dense areas of the negative, so that the high values can be adjusted with minimal effect on the low values. The effect of expansion or contraction gradually decreases with tones darker than Zone VIII (or whatever value is used for control of high values).
Specific times for N+ or N− developments are determined either from systematic tests or from development tables provided by certain Zone System books.
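A deliberately simplified sketch of expansion and contraction follows. Real N+/N− development shifts high values gradually, with the effect tapering off toward the shadows; this model applies the full shift only near the controlling high value, which is a coarse approximation for illustration.

```python
def printed_zone(placed_zone, development=0, control_zone=8):
    """Print value for a placed exposure zone under N+/N- development (sketch).

    development: 0 for N, +1 for N+1, -1 for N-1, and so on. In this
    simplification the shift applies only near the controlling high value;
    real development changes taper off gradually toward the shadows.
    """
    if placed_zone >= control_zone - 1:
        return placed_zone + development
    return placed_zone

# Adams's N+1: a Zone VII placement prints as Zone VIII.
# Contraction:  a Zone IX placement with N-1 prints as Zone VIII.
```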
Additional darkroom processes
Adams generally used selenium toning when processing prints. Selenium toner acts as a preservative and can alter
the color of a print, but Adams used it subtly, primarily because it can add almost a full zone to the tonal range of the
final print, producing richer dark tones that still hold shadow detail. His book The Print described using the
techniques of dodging and burning to selectively darken or lighten areas of the final print.
The Zone System requires that every variable in photography, from exposure to darkroom production of the print, be
calibrated and controlled. The print is the last link in a chain of events, no less important to the Zone System than
exposure and development of the film. With practice, the photographer visualizes the final print before the shutter is
released.
Application to other media
Roll film
Unlike sheet film, in which each negative can be individually developed, an entire roll must be given the same development, so that N+ and N− development are normally unavailable.[10] The key element in the scene is placed on the desired zone, and the rest of the scene falls where it will. Some contrast control is still available with the use of different paper grades. Adams (1981, 93–95) described use of the Zone System with roll film. In most cases, he recommended N−1 development when a single roll was to be exposed under conditions of varying contrast, so that exposure could be sufficient to give adequate shadow detail but avoid excessive density and grain build-up in the highlights.
Color film
Because of color shifts, color film usually does not lend itself to variations in development time. Use of the Zone
System with color film is similar to that with black-and-white roll film, except that the exposure range is somewhat
less, so that there are fewer zones between black and white. The exposure scale of color reversal film is less than that
of color negative film, and the procedure for exposure usually is different, favoring highlights rather than shadows;
the shadow values then fall where they will. Whatever the exposure range, the meter indication results in a Zone V placement. Adams (1981, 95–97) described the application to color film, both negative and reversal.
Digital photography
The Zone System can be used in digital photography just as in film photography; Adams (1981, xiii) himself
anticipated the digital image. As with color reversal film, the normal procedure is to expose for the highlights and
process for the shadows.
Until recently, digital sensors had a much narrower dynamic range than color film, which, in turn, has less range than
monochrome film. But an increasing number of digital cameras have wider dynamic ranges. One of the first was
Fujifilm's FinePix S3 Pro digital SLR, which has a proprietary Super CCD SR sensor specifically developed to
overcome the issue of limited dynamic range, using interstitial low-sensitivity photosites (pixels) to capture highlight
details.
[citation needed]
The CCD is thus able to expose at both low and high sensitivities within one shot by assigning
a honeycomb of pixels to different intensities of light.
Greater scene contrast can be accommodated by making one or more exposures of the same scene using different
exposure settings and then combining those images. It often suffices to make two exposures, one for the shadows,
and one for the highlights; the images are then overlapped and blended appropriately,[11] so that the resulting
composite represents a wider range of colors and tones. Combining images is often easier if the image-editing
software includes features, such as the automatic layer alignment in Adobe Photoshop CS3, that assist precise
registration of multiple images. Even greater scene contrast can be handled by using more than two exposures and combining with a feature such as Merge to HDR in Photoshop CS2 and later.
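The merging step can be sketched as a weighted blend in linear space. This is a simplified model (scalar grayscale pixels, linear response, a hypothetical triangular weighting), not the algorithm of any particular product.

```python
def weight(v):
    """Trust mid-tones; distrust values near clipping (triangular, assumed)."""
    return max(1e-6, 1.0 - abs(2.0 * v - 1.0))

def merge(short_exp, long_exp, t_short, t_long):
    """Estimate relative scene radiance per pixel from two captures."""
    merged = []
    for s, l in zip(short_exp, long_exp):
        ws, wl = weight(s), weight(l)
        radiance = (ws * s / t_short + wl * l / t_long) / (ws + wl)
        merged.append(radiance)
    return merged

# Two bracketed grayscale "images" of three pixels, shot at 1/4 s and 1 s.
# The long exposure clips the brighter pixels; the short one recovers them.
hdr = merge([0.1, 0.25, 0.9], [0.4, 1.0, 1.0], 0.25, 1.0)
```

Dividing each pixel by its exposure time puts both captures on a common radiance scale, so the weighted average prefers whichever frame recorded a given pixel without clipping.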
The tonal range of the final image depends on the characteristics of the display medium. Monitor contrast can vary
significantly, depending on the type (CRT, LCD, etc.), model, and calibration (or lack thereof). A computer printer's tonal output depends on the number of inks used and the paper on which it is printed. Similarly, the density range of
a traditional photographic print depends on the processes used as well as the paper characteristics.
Histograms
Most high-end digital cameras allow viewing a histogram of the tonal distribution of the captured image. This histogram, which shows the concentration of tones, running from dark on the left to light on the right, can be used to judge whether a full tonal range has been captured, or whether the exposure should be adjusted, such as by changing the exposure time, lens aperture, or ISO speed, to ensure a tonally rich starting image.[12]
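A minimal sketch of such a histogram check follows, for an 8-bit grayscale image given as a flat list of pixel values; the bin count and clipping test are illustrative choices, not camera firmware behavior.

```python
def histogram(pixels, bins=8):
    """Count 0-255 pixel values into equal-width tone bins, dark to light."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return counts

def clipping(pixels):
    """Fractions of pixels at pure black and pure white."""
    n = len(pixels)
    return (sum(p == 0 for p in pixels) / n,
            sum(p == 255 for p in pixels) / n)

# A synthetic over-exposed "image": many pixels pushed to pure white.
image = [255] * 300 + list(range(256))
shadow_clip, highlight_clip = clipping(image)  # heavy highlight clipping
```

A large spike in the rightmost bin, or a high `highlight_clip` fraction, suggests reducing exposure before reshooting.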
Misconceptions and criticisms
The Zone System gained an early reputation for being complex, difficult to understand, and impractical to apply to
real-life shooting situations and equipment. Noted photographer Andreas Feininger wrote in 1976,
I deliberately omitted discussing the so-called Zone System of film exposure determination in this book because in my opinion it makes mountains out of molehills, complicates matters out of all proportions, does not produce any results that cannot be accomplished more easily with methods discussed in this text, and is a ritual if not a form of cult rather than a practical technical procedure.[13]
Much of the difficulty may have resulted from Adams's early books, which he wrote without the assistance of a professional editor; he later conceded (Adams 1985, 325) that this was a mistake. Picker (1974) provided a concise and simple treatment that helped demystify the process. Adams's later Photography Series, published in the early 1980s (and written with the assistance of Robert Baker), also proved far more comprehensible to the average photographer.
The Zone System has often been thought to apply only to certain materials, such as black-and-white sheet film and
black-and-white photographic prints. Adams (1981, xii) suggested that when new materials become available, the
Zone System is adapted rather than discarded. He anticipated the digital age, stating
I believe the electronic image will be the next major advance. Such systems will have their own inherent and
inescapable structural characteristics, and the artist and functional practitioner will again strive to comprehend
and control them.
Yet another misconception is that the Zone System emphasizes technique at the expense of creativity. Some
practitioners have treated the Zone System as if it were an end in itself, but Adams made it clear that the Zone
System was an enabling technique rather than the ultimate objective.
Notes
[3] Adams (1981, 30) considered the incident-light meter, which measures light falling on the subject, to be of limited usefulness because it takes
no account of the specific subject luminances that actually produce the image.
[4] A typical scene includes areas of highlight and shadow, and has scene elements at various angles to the light source, so it usually is possible
to use the term average reflectance only loosely. Here, effective average reflectance is used to include these additional effects.
[5] Adams (1981) designated 11 zones; other photographers, including Picker (1974) and White, Zakia, and Lorenz (1976), used 10 zones. Either approach is workable if the photographer is consistent in her methods.
[6] Adams (1981) distinguished among exposure zones, negative density values, and print values. The negative density value is controlled by
exposure and the negative development; the print value is controlled by the negative density value, and the paper exposure and development.
Commonly, zone is also used, if somewhat loosely, to refer to negative density values and print values.
[7] Photographers commonly refer to exposure changes in terms of "stops", but properly, a stop is a device that regulates the amount of light, while a step is a division of a scale. The standard exposure scale consists of power-of-two steps; a one-step exposure increase doubles the exposure, while a one-step decrease halves the exposure. Davis (1999, 13) recommended the term "stop" to avoid confusion with the steps of a photographic step tablet, which may not correspond to standard power-of-two exposure steps. ISO standards generally use "step".
[8] Adams's description of zones and their application to typical scene elements was somewhat more extensive than the table in this article. The application of Zone IX to glaring snow is from Adams (1948).
[9] The effective speed determined for a given combination of film and developer is sometimes described as an Exposure Index (EI), but an
EI often represents a fairly arbitrary choice rather than the systematic speed determination done for use with the Zone System.
[10] If a roll-film camera accepts interchangeable backs, it is possible to use N+ and N− development by designating different backs for different
development, and changing backs when the image so requires. Without interchangeable backs, different camera bodies can be designated for
different development, but this usually is practical only with small-format cameras.
[11] http://www.luminous-landscape.com/tutorials/digital-blending.shtml
[12] Discussion on how histograms can be used to implement the Zone System in digital photography (http://www.illustratedphotography.com/basic-photography/zone-system-histograms)
[13] Feininger, Andreas, Light and Lighting in Photography, Prentice-Hall, 1976
References
Adams, Ansel. 1948. The Negative: Exposure and Development. Ansel Adams Basic Photography Series/Book 2.
Boston: New York Graphic Society. ISBN 0-8212-0717-2
Adams, Ansel. 1981. The Negative. The New Ansel Adams Basic Photography Series/Book 2. ed. Robert Baker.
Boston: New York Graphic Society. ISBN 0-8212-1131-5. Reprinted, Boston: Little, Brown, & Company, 1995.
ISBN 0-8212-2186-8. Page references are to the 1981 edition.
Adams, Ansel. 1985. Ansel Adams: An Autobiography. ed. Mary Street Alinder. Boston: Little, Brown, &
Company. ISBN 0-8212-1596-5
ANSI PH2-1979. American National Standard Method for Determining Speed of Photographic Negative
Materials (Monochrome, Continuous-Tone). New York: American National Standards Institute.
Davis, Phil. 1999. Beyond the Zone System. 4th ed. Boston: Focal Press. ISBN 0-240-80343-4
ISO 6:1993. Photography – Black-and-White Pictorial Still Camera Negative Film/Process Systems. International Organization for Standardization (http://www.iso.org).
Latour, Ira H. 1998. Ansel Adams, The Zone System and the California School of Fine Arts. History of
Photography, v22, n2, Summer 1998, pg 148. ISSN 0308-7298/98.
Picker, Fred. 1974. Zone VI Workshop: The Fine Print in Black & White Photography. Garden City, N.Y.:
Amphoto. ISBN 0-8174-0574-7
White, Minor, Richard Zakia, and Peter Lorenz. 1976. The New Zone System Manual. Dobbs Ferry, N.Y.: Morgan
& Morgan ISBN 0-87100-100-4
Further reading
Farzad, Bahman. The Confused Photographer's Guide to Photographic Exposure and the Simplified Zone System. 4th ed. Birmingham, AL: Confused Photographer's Guide Books, 2001. ISBN 0-9660817-1-4
Johnson, Chris. The Practical Zone System, Fourth Edition: For Film and Digital Photography. 4th ed. Boston:
Focal Press, 2007. ISBN 0-240-80756-1
Lav, Brian. Zone System: Step-by-Step Guide for Photographers. Buffalo, NY: Amherst Media, 2001. ISBN
1-58428-055-7
High-dynamic-range imaging
Example of HDR image including images that were used for its creation.
HDR photography of Leukbach, a river of Saarburg, Germany.
High-dynamic-range imaging (HDRI or HDR) is a set of methods used in imaging and photography to capture a greater dynamic range between the lightest and darkest areas of an image than current standard digital imaging or photographic methods. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by way of a plurality of differently exposed pictures of the same subject matter.[1][2][3]
HDR methods provide higher dynamic range from the imaging process. Non-HDR cameras take pictures at one exposure level with a limited contrast range. This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting. HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas.
HDR is also commonly used to refer to display of images derived from HDR imaging in a way that exaggerates contrast for artistic effect. The two main sources of HDR images are computer renderings and merging of multiple low-dynamic-range (LDR)[4] or standard-dynamic-range (SDR)[5] photographs. HDR images can also be acquired using special image sensors, like an oversampled binary image sensor. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.
High-dynamic-range (HDR) image made out of three pictures. Taken in Tronador,
Argentina.
Photography
In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, is a doubling of the amount of light.
Dynamic ranges of common devices

  Device                          Stops   Contrast
  LCD display                     9.5     700:1 (250:1 to 1750:1)
  Negative film (Kodak VISION3)   13      8192:1
  Human eye                       10–14   1024:1 to 16384:1[6]
  DSLR camera (Pentax K-5 II)     14.1    17560:1
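The stop and contrast columns in the table are related by a power of two, which a pair of one-line helper functions (hypothetical names) makes explicit:

```python
import math

def contrast_ratio(stops):
    """Contrast ratio equivalent to a dynamic range of the given stops."""
    return 2.0 ** stops

def stops_of(contrast):
    """Dynamic range in stops for a given contrast ratio."""
    return math.log2(contrast)
```

For example, `contrast_ratio(13)` gives the 8192:1 shown for negative film, and `stops_of(17560)` recovers roughly the 14.1 stops listed for the DSLR.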
High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using
exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a
camera's raw image format, because 8 bit JPEG encoding doesn't offer enough values to allow fine transitions (and
introduces undesirable effects due to the lossy compression).
Any camera that allows manual exposure control can create HDR images. This includes film cameras, though the
images may need to be digitized so they can be processed with software HDR methods.
Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[7] As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file.[8] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[9] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[10]
Editing
Of all imaging tasks, image editing demands the highest dynamic range. Editing operations need high precision to
avoid aliasing artifacts such as banding and jaggies. Photoshop users are familiar with the issues of low dynamic
range today. With 8 bit channels, if you brighten an image, information is lost irretrievably: darkening the image
after brightening does not restore the original appearance. Instead, all of the highlights appear flat and washed out.
One must work in a carefully planned work-flow to avoid this problem.
Scanning film
In contrast to digital photographs, color negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[11]

Dynamic ranges of photographic material

  Material             Dynamic range (f-stops)   Object contrast
  Photographic print   5                         1:32
  Color negative       8                         1:256
  Positive slide       12                        1:4096

When digitizing photographic material with an image scanner, the scanner must be able to capture the whole dynamic range of the original, or details are lost. The manufacturer's declarations concerning the dynamic range of flatbed and film scanners are often slightly inaccurate and exaggerated.[citation needed]
Despite color negative having less dynamic range than slide, it actually captures considerably more dynamic range of
the scene than does slide film. This dynamic range is simply compressed considerably.
Representing HDR images on LDR displays
Camera characteristics
Camera characteristics must be considered when reconstructing high-dynamic-range images, particularly gamma curves, sensor resolution, and noise.
Camera calibration
Camera calibration can be divided into three aspects: geometric calibration, photometric calibration, and spectral calibration. For HDR reconstruction, the important aspects are photometric and spectral calibration.
Color reproduction
Because it is human perception of color rather than color per se that is important in color reproduction, light sensors and emitters try to render and manipulate a scene's light signal in such a way as to mimic human perception of color. Based on the trichromatic nature of the human eye, the standard solution adopted by industry is to use red, green, and blue filters, referred to as RGB, to sample the input light signal and to reproduce the signal using light-based image emitters. This employs an additive color model, as opposed to the subtractive color model used with printers, paintings, etc.
Photographic color films usually have three layers of emulsion, each with a different spectral curve, sensitive to red,
green, and blue light, respectively. The RGB spectral response of the film is characterized by spectral sensitivity and spectral dye density curves.[12]
Contrast reduction
HDR images can easily be represented on common LDR media, such as computer monitors and photographic prints,
by simply reducing the contrast, just as all image editing software is capable of doing.
Clipping and compressing dynamic range
An example of a rendering of an HDRI tone
mapped image in a New York City nighttime
cityscape.
Scenes with high dynamic ranges are often represented on LDR
devices by cropping the dynamic range, cutting off the darkest and
brightest details, or alternatively with an S-shaped conversion curve
that compresses contrast progressively and more aggressively in the
highlights and shadows while leaving the middle portions of the
contrast range relatively unaffected.
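One classic S-shaped curve is the smoothstep polynomial; the sketch below is an illustrative choice (not a curve specified by the text) that leaves the midpoint fixed while compressing tonal differences near black and white.

```python
def s_curve(v):
    """Map a 0.0-1.0 linear value through a smoothstep S-curve."""
    v = min(max(v, 0.0), 1.0)        # clip out-of-range input
    return v * v * (3.0 - 2.0 * v)   # 0 and 1 are fixed; extremes compressed

# Tonal steps near white shrink, while mid-tone separation is preserved:
highlight_step = s_curve(0.95) - s_curve(0.90)  # smaller than the input step
midtone_step = s_curve(0.55) - s_curve(0.45)    # larger than in the highlights
```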
Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of the
entire image, while retaining localized contrast (between neighboring
pixels), tapping into research on how the human eye and visual cortex
perceive a scene, trying to represent the whole dynamic range while retaining realistic color and contrast.
Images with too much tone mapping processing have their range over-compressed, creating a surreal
low-dynamic-range rendering of a high-dynamic-range scene.
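By contrast, a purely global operator is easy to sketch. The well-known Reinhard mapping L/(1+L) compresses any scene luminance into display range, though unlike the local methods described above it cannot preserve contrast between neighboring pixels separately.

```python
def tone_map(luminance):
    """Reinhard global operator: compress any luminance into 0.0-1.0."""
    return luminance / (1.0 + luminance)

# Scene luminances spanning six orders of magnitude all fit in display range:
mapped = [tone_map(L) for L in (0.01, 1.0, 100.0, 10000.0)]
```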
Comparison with traditional digital images
Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or
radiance that can be observed in the real world. This is different from traditional digital images, which represent
colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called
scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore,
traditional images are usually encoded for the human visual system (maximizing the visual information stored in the
fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR
images are often gamma compressed (power law) or logarithmically encoded, or stored as floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[13][14]
HDR images often don't use fixed ranges per color channel (unlike traditional images) to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0-255 in an 8-bit-per-channel interval for red, green and blue) but instead use a floating-point representation. Common are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10 to 12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.[15]
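Python's `struct` module supports IEEE half-precision floats (format character `"e"`), which makes the 16-bit-float storage mentioned above easy to demonstrate: values spanning many stops round-trip with small relative error.

```python
import struct

def to_half_and_back(value):
    """Round-trip a luminance value through a 16-bit float channel."""
    return struct.unpack("<e", struct.pack("<e", value))[0]

# Values spanning roughly 26 stops all survive with small relative error,
# which an 8-bit integer channel (256 fixed steps) could not represent.
samples = [0.001, 0.18, 1.0, 250.0, 60000.0]
roundtripped = [to_half_and_back(v) for v in samples]
```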
History of HDR photography
1850
Photo by Gustave Le Gray
The idea of using several exposures to fix a too-extreme range of
luminance was pioneered as early as the 1850s by Gustave Le Gray to
render seascapes showing both the sky and the sea. Such rendering was
impossible at the time using standard methods, the luminosity range
being too extreme. Le Gray used one negative for the sky, and another
one with a longer exposure for the sea, and combined the two into one
picture in positive.[16]
Mid-twentieth century

External images: Schweitzer at the Lamp,[17] by W. Eugene Smith[18][19]

In the mid-twentieth century, manual tone mapping was particularly done using dodging and burning: selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is
effective because the dynamic range of the negative is significantly higher than would be available on the finished
positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the
photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert
Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal
range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.
[19]
Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the
darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which
features dodging and burning prominently, in the context of his Zone System.
With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific
timing needed during the developing process of color film. Photographers looked to film manufacturers to design
new film stocks with improved response over the years, or shot in black and white to use tone mapping methods.
Exposure/Density Characteristics of Wyckoff's Extended Exposure Response Film
Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[20] This XR film had three layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[21] The dynamic range of this extended range film has been estimated as 1:10⁸.[22] It has been used to photograph nuclear explosions,[23] for astronomical photography,[24] for spectrographic research,[25] and for medical imaging.[26] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.
1980
The desirability of HDR has been recognized for decades, but its wider usage was, until quite recently, precluded by the limitations imposed by the available computer processing power. Probably the first practical application of HDRI was by the movie industry in the late 1980s, and in 1985 Gregory Ward created the Radiance RGBE image format – the first HDR imaging file format.
The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988. In 1993 the first commercial medical camera that performed real-time capture of multiple images with different exposures to produce an HDR video image was introduced by the same group.
Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993,[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.
This method was developed to produce a high-dynamic-range image from a set of photographs taken with a range of
exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now
popularly used to refer to this process. This composite method is different from (and may be of lesser or greater
quality than) the production of an image from one exposure of a sensor that has a native high dynamic range. Tone
mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer
screen.
1996
The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of
digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the
global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.
Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image processing operations.
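The two-step idea can be sketched in a few lines of NumPy. This is a minimal illustration, not Mann's actual algorithm: it assumes a linear sensor response (real pipelines first recover the camera response curve), uses a simple mid-tone-weighted average to build the radiance map, and a basic global x/(1+x) operator for tone mapping:

```python
import numpy as np

def radiance_map(frames, exposure_times):
    """Step 1 (global-only): estimate a floating-point radiance map from
    differently exposed 8-bit frames, trusting mid-tone pixels most."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        x = img.astype(np.float64) / 255.0
        w = np.clip(1.0 - np.abs(x - 0.5) * 2.0, 0.05, 1.0)  # mid-tone weight
        acc += w * x / t          # divide out exposure time -> relative radiance
        wsum += w
    return acc / wsum

def tone_map(radiance):
    """Step 2: compress the radiance map to displayable range with a
    simple global operator."""
    return radiance / (1.0 + radiance)

# Hypothetical usage with a synthetic scene and 'short'/'long' exposures:
scene = np.linspace(0.0, 4.0, 8)                      # true radiance values
frames = [np.clip(scene * t, 0.0, 1.0) * 255 for t in (0.25, 1.0)]
ldr = tone_map(radiance_map([f.astype(np.uint8) for f in frames], [0.25, 1.0]))
```

The short exposure recovers the highlights that clip in the long one, while the long exposure contributes finer quantization in the shadows.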
1997
This method of combining several differently exposed images to produce one HDR image was presented to the
public by Paul Debevec.
2005
Photoshop CS2 introduced the Merge to HDR function, 32-bit floating point image support for HDR images, and HDR tone mapping for conversion of HDR images to LDR.
Video
While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[27] A few companies such as RED[28] and Arri[29] have been developing digital sensors capable of a higher dynamic range. The RED EPIC-X can capture HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the 'x' channel, which can be merged with the normal channel in post-production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet – essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer-grade HD video cameras.[30] Similar methods have been described in the academic literature in 2001[31] and 2007.[32]
Modern movies are often filmed with cameras featuring a higher dynamic range, and legacy movies can be converted even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Also, special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications that demand high accuracy in capturing temporal changes in the scene. This is especially important in the monitoring of some industrial processes such as welding, in predictive driver assistance systems in the automotive industry, and in surveillance systems, to name just a few possible applications. HDR video can also be used to speed up image acquisition in applications where a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to occur due to standardization issues. For this particular application, enhancing current low dynamic range (LDR) video signals to HDR by intelligent TV sets seems a more viable near-term solution.[33]
Examples
These are examples of four standard dynamic range images that are combined to produce two resulting tone mapped
images.
Raw material
−4 stops −2 stops +2 stops +4 stops
Results after processing
Simple contrast reduction Local tone mapping
These are examples of two standard dynamic range images that are combined to produce a resulting tone mapped
image.
Raw material
+2 stops -2 stops
Result after processing
Final Tone Mapped Image
References
[1] "Compositing Multiple Pictures of the Same Scene", by Steve Mann, in IS&T's 46th Annual Conference, Cambridge, Massachusetts, May 9–14, 1993
[16] J. Paul Getty Museum. Gustave Le Gray, Photographer. July 9 – September 29, 2002. (http://www.getty.edu/art/exhibitions/le_gray) Retrieved September 14, 2008.
[17] http://www.cybergrain.com/tech/hdr/images1/eugene_smith.jpg
[18] The Future of Digital Imaging – High Dynamic Range Photography (http://www.cybergrain.com/tech/hdr/), Jon Meyer, Feb 2004
[19] 4.209: The Art and Science of Depiction (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/), Frédo Durand and Julie Dorsey, Limitations of the Medium: Compensation and accentuation – The Contrast is Limited (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast.html), lecture of Monday, April 9, 2001, slides 57–59 (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast6.pdf); image on slide 57, depiction of dodging and burning on slide 58
[21] C. W. Wyckoff. Experimental extended exposure response film. Society of Photographic Instrumentation Engineers Newsletter, June–July 1962, pp. 16–20.
[22] Michael Goesele, et al., "High Dynamic Range Techniques in Graphics: from Acquisition to Display", Eurographics 2005 Tutorial T7 (http://www.mpi-inf.mpg.de/resources/tmo/EG05_HDRTutorial_Complete.pdf)
[23] The Militarily Critical Technologies List (http://www.fas.org/irp/threat/mctl98-2/p2sec05.pdf) (1998), pages II-5-100 and II-5-107.
[24] Andrew T. Young and Harold Boeschenstein, Jr., Isotherms in the region of Proclus at a phase angle of 9.8 degrees, Scientific Report No. 5, Harvard College Observatory: Cambridge, Massachusetts, 1964.
Contre-jour
Contre-jour photo taken directly against the setting sun, causing loss of subject detail and colour and emphasizing shapes and lines. Medium: colour digital image.
Contre-jour emphasizes the outline of the man and the tunnel entrance. The ground reflections show the position of
the man. Medium: Digital scan from B&W paper print.
Contre-jour, French for 'against daylight', refers to photographs taken when the camera is pointing directly toward a source of light. An alternative term is backlighting.[1][2]
Contre-jour produces backlighting of the subject. This effect usually hides details, causes a stronger contrast between light and dark, creates silhouettes and emphasizes lines and shapes. The sun, or other light source, is often seen as either a bright spot or as a strong glare behind the subject.[2] Fill light may be used to illuminate the side of the subject facing toward the camera. Silhouetting occurs when there is a lighting ratio of 16:1 or more; at lower ratios such as 8:1 the result is instead called low-key lighting.[citation needed]
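The lighting ratios above map directly to stop differences, since each stop doubles the light. A small sketch (the helper name and meter readings are hypothetical, for illustration only):

```python
def lighting_ratio(key_reading_ev, fill_reading_ev):
    """Convert the difference between spot-meter readings (in EV) of the
    lit side and the shadow side into a lighting ratio.
    Each EV (stop) doubles the light, so ratio = 2 ** stops."""
    stops = key_reading_ev - fill_reading_ev
    return 2 ** stops

# A 4-stop difference gives the 16:1 ratio at which silhouetting occurs:
print(lighting_ratio(12, 8))   # -> 16
# A 3-stop difference gives 8:1, i.e. low-key lighting:
print(lighting_ratio(12, 9))   # -> 8
```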
References
[1] "contre-jour" (http://www.thefreedictionary.com/contre-jour). The Free Dictionary. Retrieved 2011-04-16.
[2] Freeman, Michael (2007) The Complete Guide to Light & Lighting in Digital Photography. ILEX, London: Lark Books. pp. 74–75. ISBN 978-1-57990-885-0.
Night photography
"The Night Sky" Photographed facing north at
6,600 feet (2,000m) in the Mount Hood National
Forest
The Skyline of Singapore as viewed at night
Night photography refers to photographs taken outdoors
between dusk and dawn. Night photographers generally have
a choice between using artificial light and using a long
exposure, exposing the scene for seconds, minutes, and even
hours in order to give the film or digital sensor enough time to
capture a usable image. With the progress of high-speed films,
higher-sensitivity digital image sensors, wide-aperture lenses,
and the ever-greater power of urban lights, night photography
is increasingly possible using available light.
History
View of Al Ain from top of Jebel Hafeet
In the early 1900s, a few notable photographers, Alfred Stieglitz and
William Fraser, began working at night. The first photographers known
to have produced large bodies of work at night were Brassai and Bill
Brandt. In 1932, Brassai published Paris de Nuit, a book of
black-and-white photographs of the streets of Paris at night. During
World War II, British photographer Brandt took advantage of the
black-out conditions to photograph the streets of London by moonlight.
Photography at night found several new practitioners in the 1970s,
beginning with the black and white photographs that Richard Misrach made of desert flora (1975-77). Joel
Meyerowitz made luminous large format color studies of Cape Cod at nightfall which were published in his
influential book, Cape Light (1979). Jan Staller's twilight color photographs (1977–84) of abandoned and derelict parts of New York City captured uncanny visions of the urban landscape lit by the glare of sodium vapor street lights.
Early night photograph of the Luna Park, Coney Island, from the Detroit Publishing Co. collection, 1905.
Chay kenar Boulevard in Tabriz
By the 1990s, British-born photographer Michael Kenna had
established himself as the most commercially successful night
photographer. His black-and-white landscapes were most
often set between dusk and dawn in locations that included
San Francisco, Japan, France, and England. Some of his most
memorable projects depict the Ford Motor Company's Rouge
River plant, the Ratcliffe-on-Soar Power Station in the East
Midlands in England, and many of the Nazi concentration
camps scattered across Germany, France, Belgium, Poland
and Austria.
During the beginning of the 21st century, the popularity of
digital cameras made it much easier for beginning
photographers to understand the complexities of
photographing at night. Today, there are hundreds of websites
dedicated to night photography.
Subjects
Celestial bodies (See astrophotography.)
The moon, stars, planets, etc.
City skylines
Factories and industrial areas, particularly those that are brightly lit and are emitting smoke or vapour
Fireworks
Nightlife or rock concerts
Caves (See cave photography)
Streets with or without cars
Abandoned buildings and artificial structures that are lit only by moonlight
Bodies of water that are reflecting moonlight or city lights
Lakes, rivers, canals, etc.
Thunderstorms
Amusement rides
Technique and equipment
The length of a night exposure causes the lights on moving cars to
streak across the image
The following techniques and equipment are generally
used in night photography.
A tripod is usually necessary due to the long
exposure times. Alternatively, the camera may be
placed on a steady, flat object e.g. a table or chair,
low wall, window sill, etc.
A shutter release cable or self timer is almost always
used to prevent camera shake when the shutter is
released.
Manual focus, since autofocus systems usually
operate poorly in low light conditions. Newer digital
cameras incorporate a Live View mode which often
allows very accurate manual focusing.
A stopwatch or remote timer, to time very long
exposures where the camera's bulb setting is used.
Long exposures and multiple flashes
The long exposure multiple flash technique is a method of night or low-light photography which uses a mobile flash unit to expose various parts of a building or interior during a long exposure.
This technique is often combined with using coloured gels in front of the flash unit to provide different colours in
order to illuminate the subject in different ways. It is also common to flash the unit several times during the exposure
while swapping the colours of the gels around to mix colours on the final photo. This requires some skill and a lot of
imagination since it is not possible to see how the effects will turn out until the exposure is complete. By using this
technique, the photographer can illuminate specific parts of the subject in different colours creating shadows in ways
which would not normally be possible.
Painting with light
When the correct equipment is used, such as a tripod and shutter release cable, the photographer can use long exposures to photograph images of light. For example, switch the exposure to manual and select the bulb setting on the camera, then trip the shutter and photograph the subject while moving a flashlight or any other small light in various patterns. Experimenting with this technique can produce artistic results, though multiple attempts are usually needed to achieve the desired one.
High ISO
With advanced imaging sensors (such as back-illuminated CMOS) and sophisticated image processing, low-light photography is possible at high ISO settings without a tripod or long exposures, even on cameras with small sensors such as the Sony Cyber-shot DSC-RX100, Nikon 1 J2 and Canon PowerShot G1X, which can give good images up to ISO 400.[1]
Examples
An exposure blended night image
of the Sydney Opera House
San Francisco – Oakland Bay Bridge from Treasure Island (California), taken by Mikl Barton.
Rainbow Bridge viewed
from Odaiba
The Garden of Five
Senses, Delhi
Amusement rides Four image panorama of
Washington Park, 30 second
exposures each.
The World
Trade Center
in New York
City.
The Golden Gate Bridge at
night.
Taipei 101
at night,
fully lit.
The Space Shuttle
Columbia launches
for its mission to the
Hubble Space
Telescope.
Toronto (30-second exposure). An exposure blended image
consisting of 30, 2.5 and 10
second exposures
University of New South Wales,
Sydney (digital, night mode)
Copenhagen at night
Published night photographers
This section includes significant night photographers who have published books dedicated to night photography, and
some of their selected works.
Brassai
Paris de Nuit, Arts et metiers graphiques, 1932.
Harold Burdekin and John Morrison
London Night, Collins, 1934.
Jeff Brouws
Inside the Live Reptile Tent, Chronicle Books, 2001. ISBN 0-8118-2824-7
Alan Delaney
London After Dark, Phaidon Press, 1993. ISBN 0-7148-2870-X
Neil Folberg
Celestial Nights, Aperture Foundation, 2001. ISBN 0-89381-945-X
Karekin Goekjian
Light After Dark, Lucinne, Inc. ASIN B0006QOVCG
Todd Hido
Outskirts, Nazraeli Press, 2002. ISBN 1-59005-028-2
Peter Hujar
Night, Matthew Marks Gallery/Fraenkel Gallery, 2005. ISBN 1-880146-45-2
Rolfe Horn
28 Photographs, Nazraeli Press. ISBN 1-59005-122-X
Lance Keimig
Night Photography, Finding Your Way In The Dark, Focal Press, 2010. ISBN 978-0-240-81258-8
Brian Kelly
Grand Rapids: Night After Night, Glass Eye, 2001. ISBN 0-9701293-0-0
Michael Kenna
The Rouge, RAM Publications, 1995. ISBN 0-9630785-1-8
Night Work, Nazraeli Press, 2000. ISBN 3-923922-83-3
William Lesch
Expansions, RAM Publications, 1992. ISBN 4-8457-0667-9
O. Winston Link
The Last Steam Railroad in America, Harry Abrams, 1995. ISBN 0-8109-3575-9
Tom Paiva
Industrial Night, The Image Room, 2002. ISBN 0-9716928-0-7
Troy Paiva
Night Vision: The Art of Urban Exploration, Chronicle Books, 2008. ISBN 0-8118-6338-7
Lost America: The Abandoned Roadside West, MBI Publishing, 2003. ISBN 0-7603-1490-X
Bill Schwab
Bill Schwab: Photographs, North Light Press, 1999. ISBN 0-9765193-0-5
Gathering Calm, North Light Press, 2005. ISBN 0-9765193-2-1
Jan Staller
[2]
Frontier New York, Hudson Hills Press, 1988. ISBN 1-55595-009-4, http://www.janstaller.net/books/frontier-new-york/
Zabrina Tipton
At Night in San Francisco, San Francisco Guild of the Arts Press, 2006. ISBN 1-4243-1882-3
Giovanna Tucker
"How to Night Photography", 2011. ISBN 978-1-4657-4423-4
Volkmar Wentzel
Washington by Night, Fulcrum Publishing, 1998. ISBN 978-1-55591-410-3
References
[2] http://www.janstaller.net/
External links
Comprehensive tutorials and articles about how to do night photography (http://thenocturnes.com/resources.html) by The Nocturnes
Photoblog Wiki (http://www.photoblog.com/wiki/Night) – Photoblog.com wiki article on night photography
Short notes discussing the meaning and technique of night photography (http://www.nightfolio.co.uk/night_photography_notes_index.html) by David Baldwin
Photography for night owls (http://pages.cthome.net/rwinkler/nightphotog.htm) – How to take photos in the style of Brassai
Night Photography Guide (http://adcuz.co.uk/how-to-articles/how-to-create-a-long-exposure-photo/) – Tutorial by Adam Currie
Multiple exposure
A multiple exposure composite
image of a lunar eclipse taken over
Hayward, California in 2004.
In photography and cinematography, a multiple exposure is the superimposition
of two or more exposures to create a single image, and double exposure has a
corresponding meaning in respect of two images. The exposure values may or
may not be identical to each other.
Overview
Ordinarily, cameras have a sensitivity to light that is a function of time. For
example, a one second exposure is an exposure in which the camera image is
equally responsive to light over the exposure time of one second. The criterion
for determining that something is a double exposure is that the sensitivity goes
up and then back down. The simplest example of a multiple exposure is a double
exposure without flash, i.e. two partial exposures are made and then combined
into one complete exposure. Some single exposures, such as "flash and blur" use
a combination of electronic flash and ambient exposure. This effect can be
approximated by a Dirac delta measure (flash) and a constant finite rectangular
window, in combination. For example, a sensitivity window comprising a Dirac
comb combined with a rectangular pulse, is considered a multiple exposure, even though the sensitivity never goes to
zero during the exposure.
Double exposure
Analogue
Composer Karlheinz Stockhausen, double
exposure made using a film camera, 1980
Double exposure made using a film camera
In photography and cinematography, multiple exposure is a technique
in which the camera shutter is opened more than once to expose the
film multiple times, usually to different images. The resulting image
contains the subsequent image/s superimposed over the original. The
technique is sometimes used as an artistic visual effect and can be used
to create ghostly images or to add people and objects to a scene that
were not originally there. It is frequently used in photographic hoaxes.
It is considered easiest to have a manual winding camera for double
exposures. On automatic winding cameras, as soon as a picture is taken
the film is typically wound to the next frame. Some more advanced
automatic winding cameras have the option for multiple exposures but
it must be set before making each exposure. Manual winding cameras
with a multiple exposure feature can be set to double-expose after
making the first exposure.
Since shooting multiple exposures will expose the same frame multiple times, negative exposure compensation must first be set to avoid overexposure. For example, to expose the frame twice with correct exposure, a −1 EV compensation has to be applied, and −2 EV for exposing four times. This may not be necessary when photographing a lit subject in two (or more) different positions against a perfectly dark background, as the background area will be essentially unexposed.
Medium to low light is ideal for double exposures. A tripod may not be necessary if combining different scenes in
one shot. In some conditions, for example, recording the whole progress of a lunar eclipse in multiple exposures, a
stable tripod is essential.
More than two exposures can be combined, with care not to overexpose the film.
Digital
Multiple exposure of one person using Adobe
Photoshop
Digital technology enables images to be superimposed over each other by using a software photo editor, such as Adobe Photoshop or GIMP. These programs allow the opacity of the images to be altered and an image to be overlaid over another. They can also set the layers to multiply mode, which 'adds' the colors together rather than making the colors of either image pale and translucent. Many digital SLR cameras allow multiple exposures to be made on the same image within the camera without the need for any external software.
Long exposures
A four hour long exposure, using multiple shorter
exposures
With traditional film cameras, a long exposure is a single exposure,
whereas with electronic cameras a long exposure can be obtained by
integrating together many exposures. This averaging also permits there
to be a time-windowing function, such as a Gaussian, that weights time
periods near the center of the exposure time more strongly. Another
possibility for synthesizing long exposure from multiple-exposure is to
use an exponential decay in which the current frame has the strongest
weight, and previous frames are faded out with a sliding exponential
window.
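The exponential-decay variant described above can be sketched as a running average over a stack of short frames (a toy illustration with a three-pixel "scene"; real implementations operate on full video frames):

```python
import numpy as np

def synthetic_long_exposure(frames, decay=0.5):
    """Synthesize a long exposure from many short frames using an
    exponential sliding window: the newest frame receives the strongest
    weight and older frames fade out (decay < 1 controls the fade)."""
    acc = frames[0].astype(np.float64)
    for frame in frames[1:]:
        acc = decay * acc + (1.0 - decay) * frame   # running exponential average
    return acc

# Hypothetical stack of frames: a 'light' moving across three pixels.
frames = [np.array([255, 0, 0]), np.array([0, 255, 0]), np.array([0, 0, 255])]
trail = synthetic_long_exposure(frames, decay=0.5)
print(trail)   # the most recent position is brightest; older ones fade
```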
Scanning film with multiple exposure
The multiple exposure technique can also be used when scanning transparencies such as slides, film or negatives with a film scanner, in order to increase dynamic range. With multiple exposure, the original is scanned several times at different exposure intensities. An overexposed scan lightens the shadow areas of the image and enables the scanner to capture more image information there; conversely, an underexposed scan gathers more detail in the light areas. Afterwards the data can be combined into a single HDR image with increased dynamic range.
Among the scanning software solutions which implement multiple exposure are VueScan and SilverFast.
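A toy sketch of the combination step (not how VueScan or SilverFast actually work): assuming linear 8-bit scans and a known exposure factor between them, clipped pixels in the brighter scan fall back to the darker scan, while unclipped pixels benefit from the brighter scan's finer shadow quantization.

```python
import numpy as np

def merge_scans(under, over, over_gain=4.0):
    """Merge an underexposed and an overexposed scan (assumed linear,
    8-bit, differing by the exposure factor 'over_gain') into one
    floating-point array in the underexposed scan's units."""
    under = under.astype(np.float64)
    over = over.astype(np.float64)
    clipped = over >= 250                  # near-saturated in the bright scan
    return np.where(clipped, under, over / over_gain)

under = np.array([10, 200], dtype=np.uint8)   # shadow pixel, highlight pixel
over = np.array([40, 255], dtype=np.uint8)    # 4x exposure: highlight clips
print(merge_scans(under, over))               # shadow from 'over', highlight from 'under'
```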
Camera obscura
A drawing of a camera obscura
Camerae obscurae for Daguerreotype called
"Grand Photographe" produced by Charles
Chevalier (Muse des Arts et Mtiers)
A projection of an image of the New Royal
Palace in Prague Castle created with a camera
obscura
The camera obscura (Latin: camera for "vaulted chamber/room", obscura for "dark", together "darkened chamber/room"; plural: camera obscuras or camerae obscurae) is an optical device that projects an image of its surroundings on a screen. It is used in drawing and for entertainment, and was one of the inventions that led to photography and the camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, upside-down, but with color and perspective preserved. The image can be projected onto paper, and can then be traced to produce a highly accurate representation. The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.[1]
Using mirrors, as in the 18th-century overhead version (illustrated in
the History section below), it is possible to project a right-side-up
image. Another more portable type is a box with an angled mirror
projecting onto tracing paper placed on the glass top, the image being
upright as viewed from the back.
As the pinhole is made smaller, the image gets sharper, but the
projected image becomes dimmer. With too small a pinhole, however,
the sharpness worsens, due to diffraction. Some practical camera
obscuras use a lens rather than a pinhole because it allows a larger
aperture, giving a usable brightness while maintaining focus. (See
pinhole camera for construction information.)
History
Camera obscura in Encyclopdie, ou dictionnaire
raisonn des sciences, des arts et des mtiers
The camera obscura has been known to scholars since the time of Mozi and Aristotle.[2] The first surviving mention of the principles behind the pinhole camera or camera obscura belongs to Mozi (Mo-Ti) (470 to 390 BCE), a Chinese philosopher and the founder of Mohism.[3] Mozi referred to this device as a "collecting plate" or "locked treasure room."[4]
The Greek philosopher Aristotle (384 to 322 BCE) understood the optical principle of the pinhole camera.[5] He viewed the crescent shape of a partially eclipsed sun projected on the ground through the holes in a sieve and through the gaps between the leaves of a plane tree. In the 4th century BCE, Aristotle noted that "sunlight travelling through small openings between the leaves of a tree, the holes of a sieve, the openings of wickerwork, and even interlaced fingers will create circular patches of light on the ground." Euclid's Optics (ca. 300 BCE) presupposed the camera obscura as a demonstration that light travels in straight lines.[6] In the 4th century, Greek scholar Theon of Alexandria observed that "candlelight passing through a pinhole will create an illuminated spot on a screen that is directly in line with the aperture and the center of the candle."
In the 6th century, Byzantine mathematician and architect Anthemius of Tralles (most famous for designing the Hagia Sophia) used a type of camera obscura in his experiments.
In the 9th century, Al-Kindi (Alkindus) demonstrated that "light from the right side of the flame will pass through
the aperture and end up on the left side of the screen, while light from the left side of the flame will pass through the
aperture and end up on the right side of the screen."
Alhazen also gave the first clear description[7] and early analysis[8] of the camera obscura and pinhole camera. While Aristotle, Theon of Alexandria, Al-Kindi (Alkindus) and Chinese philosopher Mozi had earlier described the effects of a single light passing through a pinhole, none of them suggested that what is being projected onto the screen is an image of everything on the other side of the aperture. Alhazen was the first to demonstrate this with his lamp experiment, where several different light sources are arranged across a large area. He was thus the first to successfully project an entire image from outdoors onto a screen indoors with the camera obscura.
The Song Dynasty Chinese scientist Shen Kuo (1031–1095) experimented with a camera obscura, and was the first to apply geometrical and quantitative attributes to it in his book of 1088 AD, the Dream Pool Essays.[9] However, Shen Kuo alluded to the fact that the Miscellaneous Morsels from Youyang, written in about 840 AD by Duan Chengshi (d. 863) during the Tang Dynasty (618–907), mentioned inverting the image of a Chinese pagoda tower beside a seashore.[9] In fact, Shen makes no assertion that he was the first to experiment with such a device.[9] Shen wrote of Duan's book: "[Miscellaneous Morsels from Youyang] said that the image of the pagoda is inverted because it is beside the sea, and that the sea has that effect. This is nonsense. It is a normal principle that the image is inverted after passing through the small hole."[9]
In 13th-century England, Roger Bacon described the use of a camera obscura for the safe observation of solar eclipses.[10] Its potential as a drawing aid may have been familiar to artists by as early as the 15th century; Leonardo da Vinci (1452–1519 AD) described the camera obscura in Codex Atlanticus. Johann Zahn's Oculus Artificialis Teledioptricus Sive Telescopium, published in 1685, contains many descriptions, diagrams, illustrations and sketches of both the camera obscura and of the magic lantern.
Giambattista della Porta is said to have perfected the camera obscura. He described it as having a convex lens in later editions of his Magia Naturalis (1558–1589), the popularity of which helped spread knowledge of it. He compared the shape of the human eye to the lens in his camera obscura, and provided an easily understandable example of how light could bring images into the eye. One chapter in Conte Algarotti's Saggio sopra Pittura (1764) is dedicated to the use of a camera ottica ("optic chamber") in painting.[11]
Camera obscura, from a manuscript of military designs. 17th century,
possibly Italian.
The 17th century Dutch Masters, such as Johannes
Vermeer, were known for their magnificent attention to
detail. It has been widely speculated that they made use
of such a camera, but the extent of their use by artists at
this period remains a matter of considerable
controversy, recently revived by the HockneyFalco
thesis.
The term "camera obscura" itself was first used by the
German astronomer Johannes Kepler in 1604.
[12]
The
English physician and author Sir Thomas Browne
speculated upon the interrelated workings of optics and
the camera obscura in his 1658 discourse The Garden
of Cyrus thus:
For at the eye the Pyramidal rayes from the object, receive a decussation, and so strike a second base upon the
Retina or hinder coat, the proper organ of Vision; wherein the pictures from objects are represented,
answerable to the paper, or wall in the dark chamber; after the decussation of the rayes at the hole of the
hornycoat, and their refraction upon the Christalline humour, answering the foramen of the window, and the
convex or burning-glasses, which refract the rayes that enter it.
Four drawings by Canaletto, representing Campo
San Giovanni e Paolo in Venice, obtained with a
camera obscura (Venice, Gallerie
dell'Accademia)
Early models were large, comprising either a whole darkened room or
a tent (as employed by Johannes Kepler). By the 18th century,
following developments by Robert Boyle and Robert Hooke, more
easily portable models became available. These were extensively used
by amateur artists while on their travels, but they were also employed
by professionals, including Paul Sandby, Canaletto and Joshua
Reynolds, whose camera (disguised as a book) is now in the Science
Museum (London). Such cameras were later adapted by Joseph
Nicéphore Niépce, Louis Daguerre and William Fox Talbot for
creating the first photographs.
Gallery
A freestanding room-sized
camera obscura at the University
of North Carolina at Chapel Hill.
One of the pinholes can be seen
in the panel to the left of the
door.
A freestanding room-sized
camera obscura in the shape of
a camera located in San
Francisco at the Cliff House in
Ocean Beach
Image of the South Downs
of Sussex as seen in the
camera obscura of
Foredown Tower,
Portslade, England
A camera obscura created
by Mark Ellis is built in the
style of an Adirondack
mountain cabin, and sits by
the shore of Lake Flower
in the village of Saranac
Lake, NY.
19th-century artist using a
camera obscura to outline
his subject
Image of a modern day camera
obscura
Usage of a modern day camera
obscura used indoors
In popular culture
In the Mad Men (season 3) episode "Seven Twenty Three", Don Draper and Carlton Hanson help their children Sally
Draper and Ernie Hanson, and the children's teacher, Suzanne Farrell, cut holes in cardboard boxes to create camerae obscurae, with
which the kids and Miss Farrell watch the total solar eclipse of July 20, 1963; additionally, Betty Draper and Henry
Francis encounter a couple in downtown Ossining using a similar device. Miss Farrell explains to the children and
their fathers how it works and cautions against looking directly into the sun (which Betty Draper does, and as a result
feels faint afterward).
[13]
Notes
[1] Cliff Railway and Camera Obscura, Aberystwyth (http://www.cardiganshirecoastandcountry.com/cliff-railway-camera-obscura-aberystwyth.php)
[2] Jan Campbell (2005). "Film and cinema spectatorship: melodrama and mimesis" (http://books.google.com/books?id=lOEqvkmSxhsC&pg=PA114&dq&hl=en#v=onepage&q=&f=false). Polity. p. 114. ISBN 0-7456-2930-X
[3] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. Page 82.
[4] Ouellette, Jennifer (2005). Black Bodies and Quantum Cats: Tales from the Annals of Physics. London: Penguin Books Ltd. Page 52.
[5] Aristotle, Problems, Book XV
[6] The Camera Obscura: Aristotle to Zahn (http://inventors.about.com/gi/dynamic/offsite.htm?site=http://web.archive.org/web/20080420165232/http://www.acmi.net.au/AIC/CAMERA_OBSCURA.html)
[7] :
[8] :
[9] Needham, Volume 4, Part 1, 98.
[10] BBC - The Camera Obscura (http://www.bbc.co.uk/dna/h2g2/A2875430)
[12] History of Photography and the Camera - Part 1: The first photographs (http://inventors.about.com/library/inventors/blphotography.htm)
References
Hill, Donald R. (1993), Islamic Science and Engineering, Edinburgh University Press, page 70.
Lindberg, D.C. (1976), Theories of Vision from Al Kindi to Kepler, The University of Chicago Press, Chicago and London.
Nazeef, Mustapha (1940), "Ibn Al-Haitham As a Naturalist Scientist" (Arabic), published proceedings of the Memorial Gathering of Al-Hasan Ibn Al-Haitham, 21 December 1939, Egypt Printing.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd.
Omar, S.B. (1977). Ibn al-Haitham's Optics, Bibliotheca Islamica, Chicago.
Wade, Nicholas J.; Finger, Stanley (2001), "The eye as an optical instrument: from camera obscura to Helmholtz's perspective", Perception 30 (10): 1157–1177, doi:10.1068/p3210 (http://dx.doi.org/10.1068/p3210), PMID 11721819 (http://www.ncbi.nlm.nih.gov/pubmed/11721819)
External links
Timeline - The Camera Obscura in History (http://www.obscurajournal.com/history.php) The Camera Obscura Journal, Sfumato Press. One error on page:
Nicéphore Niépce did not use a pinhole camera in 1827 - he used a camera obscura with a lens.
An Appreciation of the Camera Obscura (http://brightbytes.com/cosite/cohome.html)
The Camera Obscura in San Francisco (http://www.giantcamera.com/) the Giant Camera of San Francisco at Ocean Beach, added to the National Register of Historic Places in 2001
Camera Obscura and World of Illusions (http://www.camera-obscura.co.uk/), Edinburgh
Dumfries Museum & Camera Obscura, Dumfries, Scotland (http://www.dumfriesmuseum.demon.co.uk/dumfmuse.html)
Vermeer and the Camera Obscura (http://www.bbc.co.uk/history/british/empire_seapower/vermeer_camera_01.shtml) by Philip Steadman
Paleo-camera (http://www.paleo-camera.com/) the camera obscura and the origins of art
List of all known Camera Obscura (http://www.foredown.virtualmuseum.info/camera_obscuras/default.asp)
Willett & Patteson (http://www.amazingcameraobscura.co.uk) camera obscura hire and creation
Camera Obscura and Outlook Tower, Edinburgh, Scotland (http://www.scottish-places.info/features/featurefirst1049.html)
Cameraobscuras.com (http://www.cameraobscuras.com) George T Keene builds custom camera obscuras like the Griffith Observatory CO in Los Angeles.
Camera obscura in Trondheim, Norway (http://www.ntnu.no/1-2-tre/06), built by students of architecture and engineering from the Norwegian University of Science and Technology (NTNU)
Pinhole camera
34
Holes in the leaf canopy project images of a solar
eclipse on the ground.
A home-made pinhole camera (on the left),
wrapped in black plastic to prevent light leaks,
and related developing supplies.
A common use of the pinhole camera is to capture the movement of the
sun over a long period of time. This type of photography is called
Solargraphy.
The image may be projected onto a translucent screen for real-time
viewing (popular for observing solar eclipses; see also camera
obscura), or can expose photographic film or a charge coupled device
(CCD). Pinhole cameras with CCDs are often used for
surveillance
[citation needed]
because they are difficult to detect.
Pinhole devices provide safety for the eyes when viewing solar
eclipses because the event is observed indirectly, the diminished
intensity of the pinhole image being harmless compared with the full
glare of the Sun itself.
[citation needed]
Worldwide Pinhole Photography Day is held on the last Sunday of April.
[1]
Invention of pinhole camera
In the 10th century, Persian scientist Ibn al-Haytham (Alhazen) wrote
about naturally-occurring rudimentary pinhole cameras. For example,
light may travel through the slits of wicker baskets or the crossing of
tree leaves.
[2]
(The circular dapples on a forest floor, actually pinhole
images of the sun, can be seen to have a bite taken out of them during
partial solar eclipses opposite to the position of the moon's actual
occultation of the sun because of the inverting effect of pinhole lenses.)
Alhazen published this idea in the Book of Optics in 1021 AD. He improved on the camera after realizing that the
smaller the pinhole, the sharper the image (though the less light it admits). He provided the first clear description of the
construction of a camera obscura (Latin: "dark chamber").
In the 5th century BC, the Mohist philosopher Mozi (墨子) in ancient China mentioned the effect of an inverted
image forming through a pinhole.
[3]
The image of an inverted Chinese pagoda is mentioned in Duan Chengshi's (d.
863) book Miscellaneous Morsels from Youyang, written during the Tang Dynasty (618–907).
[4]
Along with
experimenting with the pinhole camera and the burning mirror of the ancient Mohists, the Song Dynasty (960–1279
CE) Chinese scientist Shen Kuo (1031–1095) experimented with the camera obscura and was the first to establish
geometrical and quantitative attributes for it.
[4]
Ancient pinhole camera effect caused by
balistrarias in the Castelgrande in Bellinzona
In the 13th century, Robert Grosseteste and Roger Bacon commented
on the pinhole camera.
[5]
Between 1000 and 1600, men such as Ibn
al-Haytham, Gemma Frisius, and Giambattista della Porta wrote on the
pinhole camera, explaining why the images are upside down.
Around 1600, Giambattista della Porta added a lens to the pinhole
camera.
[6][7]
It was not until 1850 that a Scottish scientist,
Sir David Brewster, took the first photograph with a pinhole
camera. Until recently it was believed that Brewster himself coined
the term "pinhole" in "The Stereoscope"
[citation needed]
. The earliest
reference to the term has been traced back to almost a
century before Brewster, in James Ferguson's Lectures on Select
Subjects.
[8][9]
Sir William Crookes and William de Wiveleslie Abney were other early photographers to try the
pinhole technique.
[10]
Selection of pinhole size
An example of a 20 minute exposure taken with a
pinhole camera
A photograph taken with a pinhole camera using
an exposure time of 2s
Within limits, a smaller pinhole (through a thinner material) will result in sharper image resolution, because the
projected circle of confusion at the image plane is practically the same
size as the pinhole. An extremely small hole, however, can produce
significant diffraction effects and a less clear image due to the wave
properties of light.
Additionally, vignetting occurs as the diameter of
the hole approaches the thickness of the material in which it is
punched, because the sides of the hole obstruct the light entering at
anything other than 90 degrees.
The best pinhole is perfectly round (since irregularities cause
higher-order diffraction effects), and in an extremely thin piece of
material. Industrially produced pinholes benefit from laser etching, but
a hobbyist can still produce pinholes of sufficiently high quality for
photographic work.
Some examples of photographs taken using a
pinhole camera.
One method is to start with a sheet of brass shim, metal reclaimed
from an aluminium drinks can, or tin foil/aluminum foil, use fine
sandpaper to reduce the thickness of the centre of the material to the
minimum, and then carefully create a pinhole with a suitably sized
needle.
A method of calculating the optimal pinhole diameter was first
attempted by Jozef Petzval. The crispest image is obtained using a
pinhole size determined by the formula
[11]
d = √(2fλ)
where d is pinhole diameter, f is focal length (distance from pinhole to
image plane) and λ is the wavelength of light.
For standard black-and-white film, a wavelength of light corresponding
to yellow-green (550 nm) should yield optimum results. For a
pinhole-to-film distance of 1 inch (25mm), this works out to a pinhole
0.17mm in diameter.
[12]
For 5cm, the appropriate diameter is
0.23mm.
[13]
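The two worked examples above are easy to reproduce numerically. A minimal Python sketch of the d = √(2fλ) rule (the function name is illustrative, not from any standard library):

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
    """Optimum pinhole diameter d = sqrt(2 * f * lambda), all in metres."""
    return math.sqrt(2 * focal_length_m * wavelength_m)

# 1 inch (25.4 mm) pinhole-to-film distance, yellow-green light (550 nm)
d1 = optimal_pinhole_diameter(0.0254)
# 5 cm pinhole-to-film distance
d2 = optimal_pinhole_diameter(0.05)

print(round(d1 * 1000, 2))  # diameter in mm: 0.17
print(round(d2 * 1000, 2))  # diameter in mm: 0.23
```

Both results agree with the figures quoted in the text (0.17 mm for 1 inch, 0.23 mm for 5 cm).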
The depth of field is basically infinite, but this does not mean that no optical blurring occurs. The infinite depth of
field means that image blur depends not on object distance, but on other factors, such as the distance from the
aperture to the film plane, the aperture size, and the wavelength(s) of the light source.
Pinhole camera construction
Pinhole cameras can be handmade by the photographer for a particular purpose. In its simplest form, the
photographic pinhole camera can consist of a light-tight box with a pinhole in one end, and a piece of film or
photographic paper wedged or taped into the other end. A flap of cardboard with a tape hinge can be used as a
shutter. The pinhole may be punched or drilled using a sewing needle or small diameter bit through a piece of tinfoil
or thin aluminum or brass sheet. This piece is then taped to the inside of the light tight box behind a hole cut through
the box. A cylindrical oatmeal container may be made into a pinhole camera.
Pinhole cameras can be constructed with a sliding film holder or back so the distance between the film and the
pinhole can be adjusted. This allows the angle of view of the camera to be changed and also the effective f-stop ratio
of the camera. Moving the film closer to the pinhole will result in a wide angle field of view and a shorter exposure
time. Moving the film farther away from the pinhole will result in a telephoto or narrow angle view and a longer
exposure time.
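The effect of a sliding back can be quantified with the usual angle-of-view relation, AOV = 2·arctan(w / 2f), where w is the film width and f is the pinhole-to-film distance. A short sketch (the function name and the 36 mm frame width are illustrative assumptions, not from the text):

```python
import math

def angle_of_view_deg(film_width_mm, pinhole_distance_mm):
    """Horizontal angle of view: AOV = 2 * arctan(w / (2 * f)), in degrees."""
    return math.degrees(2 * math.atan(film_width_mm / (2 * pinhole_distance_mm)))

# For a 36 mm-wide frame: film close to the pinhole gives a wide view,
# film farther from the pinhole gives a narrow ("telephoto") view.
wide = angle_of_view_deg(36, 25)     # roughly 71.5 degrees
narrow = angle_of_view_deg(36, 100)  # roughly 20.4 degrees
print(round(wide, 1), round(narrow, 1))
```

Doubling the pinhole-to-film distance roughly halves the angle of view, which is why the same camera body can act as either a wide-angle or a narrow-angle instrument.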
Pinhole cameras can also be constructed by replacing the lens assembly in a conventional camera with a pinhole. In
particular, compact 35mm cameras whose lens and focusing assembly have been damaged can be reused as pinhole
cameras, maintaining the use of the shutter and film-winding mechanisms. As a result of the enormous increase in
f-number while maintaining the same exposure time, one must use a fast film in direct sunshine.
Pinholes (homemade or commercial) can be used in place of the lens on an SLR. Use with a digital SLR allows
metering and composition by trial and error, and is effectively free, so is a popular way to try pinhole
photography.
[14]
Unusual materials have been used to construct pinhole cameras, e.g., a Chinese roast duck, by Martin Cheung.
[15]
Calculating the f-number & required exposure
A pinhole camera made from an oatmeal box.
The pinhole is in the center. The black plastic
which normally surrounds this camera (see
picture above) has been removed.
A fire hydrant photographed by a pinhole camera
made from a shoe box, exposed on photographic
paper (top). The length of the exposure was 40
seconds. There is noticeable flaring in the
bottom-right corner of the image, likely due to
extraneous light entering the camera box.
The f-number of the camera may be calculated by dividing the distance
from the pinhole to the imaging plane (the focal length) by the
diameter of the pinhole. For example, a camera with a 0.5mm
diameter pinhole, and a 50mm focal length would have an f-number of
50/0.5, or 100 (f/100 in conventional notation).
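The division described above can be wrapped in a one-line helper (the name is my own, chosen for illustration):

```python
def pinhole_f_number(focal_length_mm, pinhole_diameter_mm):
    """f-number N = focal length / aperture diameter (same units)."""
    return focal_length_mm / pinhole_diameter_mm

# The example from the text: a 0.5 mm pinhole at 50 mm focal length
print(pinhole_f_number(50, 0.5))  # 100.0, i.e. f/100
```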
Due to the large f-number of a pinhole camera, exposures will often
encounter reciprocity failure.
[16]
Once exposure time has exceeded
about 1 second for film or 30 seconds for paper, one must compensate
for the breakdown in linear response of the film/paper to intensity of
illumination by using longer exposures.
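Setting reciprocity failure aside for a moment, a meter reading taken at an ordinary f-number can be transferred to the pinhole's f-number with the standard relation t₂ = t₁ · (N₂/N₁)²; the reciprocity correction from the film's data sheet is then applied on top of that base time. A sketch, with invented example values (1/125 s at f/16 is an assumption, not from the text):

```python
def equivalent_exposure(t_metered_s, n_metered, n_pinhole):
    """Exposure time scales with the square of the f-number ratio."""
    return t_metered_s * (n_pinhole / n_metered) ** 2

# Scene metered at 1/125 s at f/16, shot through an f/100 pinhole:
t = equivalent_exposure(1 / 125, 16, 100)
print(round(t, 4))  # 0.3125 s, before any reciprocity compensation
```

Even a modest daylight reading thus lands near the 1-second threshold mentioned above, which is why reciprocity compensation is so often needed in pinhole work.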
Other special features can be built into pinhole cameras such as the
ability to take double images, by using multiple pinholes, or the ability
to take pictures in cylindrical or spherical perspective by curving the
film plane.
These characteristics can be used for creative purposes. Once
considered an obsolete technique from the early days of
photography, pinhole photography is from time to time a trend in
artistic photography.
Related cameras, image forming devices, or developments from it
include Franke's widefield pinhole camera, the pinspeck camera, and
the pinhead mirror.
NASA (via the NASA Institute for Advanced Concepts) has funded
initial research into the New Worlds Mission project, which proposes
to use a pinhole camera with a diameter of 10 m and a focal length of
200,000 km to image Earth-sized planets in other star systems.
Coded apertures
A non-focusing coded-aperture optical system may be thought of as
multiple pinhole cameras in conjunction. By adding pinholes, light
throughput and thus sensitivity are increased. However, multiple
images are formed, usually requiring computer deconvolution.
References
[1] http://www.bbc.co.uk/news/in-pictures-22150973
[2] "Light Through the Ages" (http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Light_1.html).
[3] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. Page 82.
[4] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. Page 98.
[5] A reconsideration of Roger Bacon's theory of pinhole images (http://www.springerlink.com/index/R2717G210K21R7R2.pdf)
[6] History of Photography and the Camera - Pinhole Camera to Daguerreotype (http://inventors.about.com/library/inventors/blphotography.htm)
[7] http://www-history.mcs.st-andrews.ac.uk/Biographies/Porta.html
[9] What is a Pinhole Camera? (http://www.pinhole.cz/en/pinholecameras/whatis.html)
[10] Pinhole photography history (http://photo.net/learn/pinhole/pinhole)
[11] Rayleigh (1891), Lord Rayleigh on Pin-hole Photography (http://idea.uwosh.edu/nick/rayleigh.pdf), in Philosophical Magazine, vol. 31, pp. 87–99, presents his formal analysis, but the layman's formula "pinhole radius = ()" appears in Strutt, J.W., Lord Rayleigh (1891), Some applications of photography, in Nature, vol. 44, p. 254.
[12] Equation for calculation with f = 1 in, using Google for evaluation (http://www.google.com/search?q=sqrt(2*1in*550nm)=)
[13] Equation for calculation with f = 5 cm, using Google for evaluation (http://www.google.com/search?q=+sqrt(2*5cm*550nm)=)
[14] http://www.pcw.co.uk/personal-computer-world/features/2213298/hands-digital-pinhole-camera
[15] http://www.urbanphoto.net/blog/2010/11/25/how-a-roast-duck-sees-chinatown/
[16] http://www.nancybreslin.com/pinholetech.html
External links
Pinhole Photography by Vladimir Zivkovic (http://www.behance.net/gallery/PICTORIALISM-Winter-Pinhole-Photography/897715)
Worldwide Pinhole Photography Day website (http://www.pinholeday.org/gallery/index.php)
An easy way to convert a DSLR to a pinhole camera (http://www.alistairscott.com/howto/pinhole/)
Pinhole Photography and Camera Design Calculators (http://www.mrpinhole.com/)
Illustrated history of cinematography (http://www.precinemahistory.net/)
How to Make and Use a Pinhole Camera (http://www.kodak.com/global/en/consumer/education/lessonPlans/pinholeCamera/pinholeCanBox.shtml)
Oregon Art Beat: Pinhole Photos by Zeb Andrews (http://watch.opb.org/video/2364990891)
Stereoscopy
Pocket stereoscope with original test image. Used
by military to examine stereoscopic pairs of aerial
photographs.
View of Boston, c. 1860; an early stereoscopic
card for viewing a scene from nature
Stereoscopy (also called stereoscopics or 3D imaging) is a technique
for creating or enhancing the illusion of depth in an image by means of
stereopsis for binocular vision. The word stereoscopy derives from the
Greek "στερεός" (stereos), "firm, solid"
[1]
+ "σκοπέω" (skopeō), "to
look", "to see".
[2]
Most stereoscopic methods present two offset images separately to the
left and right eye of the viewer. These two-dimensional images are
then combined in the brain to give the perception of 3D depth. This
technique is distinguished from 3D displays that display an image in
three full dimensions, allowing the observer to increase information
about the 3-dimensional objects being displayed by head and eye
movements.
Background
Stereoscopy creates the illusion of three-dimensional depth from given
two-dimensional images. Human vision, including the perception of
depth, is a complex process which only begins with the acquisition of
visual information taken in through the eyes; much processing ensues
within the brain, as it strives to make intelligent and meaningful sense
of the raw information provided. One of the very important visual
functions that occur within the brain as it interprets what the eyes see is
Kaiserpanorama consisted of a multi-station
viewing apparatus and sets of stereo slides.
Patented by A. Fuhrmann around 1890.
Company of ladies watching stereoscopic
photographs, painting by Jacob Spoel, before
1868. A very early depiction of people using a
stereoscope.
that of assessing the relative distances of various objects from the
viewer, and the depth dimension of those same perceived objects. The
brain makes use of a number of cues to determine relative distances
and depth in a perceived scene, including:
[3]
Stereopsis
Accommodation of the eye
Overlapping of one object by another
Subtended visual angle of an object of known size
Linear perspective (convergence of parallel edges)
Vertical position (objects higher in the scene generally tend to be
perceived as further away)
Haze, desaturation, and a shift to bluishness
Change in size of textured pattern detail
(All the above cues, with the exception of the first two, are present in
traditional two-dimensional images such as paintings, photographs, and
television.)
Stereoscopy is the production of the illusion of depth in a photograph,
movie, or other two-dimensional image by presenting a slightly
different image to each eye, and thereby adding the first of these cues
(stereopsis) as well. Both of the 2D offset images are then combined in
the brain to give the perception of 3D depth. It is important to note that
since all points in the image focus at the same plane regardless of their
depth in the original scene, the second cue, focus, is still not duplicated
and therefore the illusion of depth is incomplete. There are also
primarily two effects of stereoscopy that are unnatural for the human
vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's
perceived position in front of or behind the display or screen and the real origin of that light and second, possible
crosstalk between the eyes, caused by imperfect image separation by some methods.
Although the term "3D" is ubiquitously used, it is also important to note that the presentation of dual 2D images is
distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case
of "3D" displays, the observer's head and eye movement will not increase information about the 3-dimensional
objects being displayed. Holographic displays or volumetric display are examples of displays that do not have this
limitation. Similar to the technology of sound reproduction, in which it is not possible to recreate a full
3-dimensional sound field merely with two stereophonic speakers, it is likewise an overstatement of capability to
refer to dual 2D images as being "3D". The accurate term "stereoscopic" is more cumbersome than the common
misnomer "3D", which has been entrenched after many decades of unquestioned misuse. Although most stereoscopic
displays do not qualify as real 3D display, all real 3D displays are also stereoscopic displays because they meet the
lower criteria as well.
Most 3D displays use this stereoscopic method to convey images. It was invented by Sir Charles Wheatstone in
1838.
[4][5]
Wheatstone mirror stereoscope
Wheatstone originally used his stereoscope (a rather bulky device)
[6]
with drawings because photography was not yet available, yet his
original paper seems to foresee the development of a realistic imaging
method:
[7]
For the purposes of illustration I have employed only
outline figures, for had either shading or colouring been
introduced it might be supposed that the effect was wholly
or in part due to these circumstances, whereas by leaving
them out of consideration no room is left to doubt that the entire effect of relief is owing to the
simultaneous perception of the two monocular projections, one on each retina. But if it be required to
obtain the most faithful resemblances of real objects, shadowing and colouring may properly be
employed to heighten the effects. Careful attention would enable an artist to draw and paint the two
component pictures, so as to present to the mind of the observer, in the resultant perception, perfect
identity with the object represented. Flowers, crystals, busts, vases, instruments of various kinds, &c.,
might thus be represented so as not to be distinguished by sight from the real objects themselves.
[4]
Stereoscopy is used in photogrammetry and also for entertainment through the production of stereograms.
Stereoscopy is useful in viewing images rendered from large multi-dimensional data sets such as are produced by
experimental data. An early patent for 3D imaging in cinema and television was granted to physicist Theodor V.
Ionescu in 1936. Modern industrial three-dimensional photography may use 3D scanners to detect and record
three-dimensional information.
[8]
The three-dimensional depth information can be reconstructed from two images
using a computer by matching corresponding pixels in the left and right images (e.g.,
[9]
). Solving the correspondence
problem in the field of computer vision aims to create meaningful depth information from two images.
Visual requirements
Anatomically, there are three levels of binocular vision required to view stereo images:
1. Simultaneous perception
2. Fusion (binocular 'single' vision)
3. Stereopsis
These functions develop in early childhood. Strabismus can disrupt the development of
stereopsis; however, orthoptic treatment can be used to improve binocular vision. A person's stereoacuity determines
the minimum image disparity they can perceive as depth. It is believed that approximately 12% of people are unable
to properly see 3D images, due to a variety of medical conditions.
[10][11]
According to another experiment, up to 30%
of people have very weak stereoscopic vision, preventing depth perception based on stereo disparity. This
nullifies or greatly decreases the immersive effect of stereo for them.
[12]
Side-by-side
"The early bird catches the worm" Stereograph
published in 1900 by North-Western View Co. of
Baraboo, Wisconsin, digitally restored.
Traditional stereoscopic photography consists of creating a 3D illusion
starting from a pair of 2D images, a stereogram. The easiest way to
enhance depth perception in the brain is to provide the eyes of the
viewer with two different images, representing two perspectives of the
same object, with a minor deviation equal or nearly equal to the
perspectives that both eyes naturally receive in binocular vision.
To avoid eyestrain and distortion, each of the two 2D images should be
presented to the viewer so that any object at infinite distance is
perceived by the eye as being straight ahead, the viewer's eyes being
neither crossed nor diverging. When the picture contains no object at infinite distance, such as a horizon or a cloud,
the pictures should be spaced correspondingly closer together.
The principal advantages of side-by-side viewers are the lack of diminution of brightness, which allows the presentation of
images at very high resolution and in full-spectrum color; simplicity of creation; and the fact that little or no additional image
processing is required. Under some circumstances, such as when a pair of images is presented for freeviewing, no
device or additional optical equipment is needed.
The principal disadvantage of side-by-side viewers is that large image displays are not practical and resolution is
limited by the lesser of the display medium or human eye. This is because as the dimensions of an image are
increased, either the viewing apparatus or viewer themselves must move proportionately further away from it in
order to view it comfortably. Moving closer to an image in order to see more detail would only be possible with
viewing equipment that adjusted to the difference.
Printable cross eye viewer.
Freeviewing
Freeviewing is viewing a side-by-side image pair without using a
viewing device.
[13]
Two methods are available to freeview:
[14][15]
The parallel viewing method uses an image pair with the left-eye
image on the left and the right-eye image on the right. The fused
three-dimensional image appears larger and more distant than the
two actual images, making it possible to convincingly simulate a
life-size scene. The viewer attempts to look through the images with
the eyes substantially parallel, as if looking at the actual scene. This
can be difficult with normal vision because eye focus and binocular
convergence are habitually coordinated. One approach to
decoupling the two functions is to view the image pair extremely
close up with completely relaxed eyes, making no attempt to focus
clearly but simply achieving comfortable stereoscopic fusion of the
two blurry images by the "look-through" approach, and only then
exerting the effort to focus them more clearly, increasing the viewing distance as necessary. Regardless of the
approach used or the image medium, for comfortable viewing and stereoscopic accuracy the size and spacing of
the images should be such that the corresponding points of very distant objects in the scene are separated by the
same distance as the viewer's eyes, but not more; the average interocular distance is about 63mm. Viewing much
more widely separated images is possible, but because the eyes never diverge in normal use it usually requires
some previous training and tends to cause eye strain.
The cross-eyed viewing method swaps the left and right eye images so that they will be correctly seen cross-eyed,
the left eye viewing the image on the right and vice-versa. The fused three-dimensional image appears to be
smaller and closer than the actual images, so that large objects and scenes appear miniaturized. This method is
usually easier for freeviewing novices. As an aid to fusion, a fingertip can be placed just below the division
between the two images, then slowly brought straight toward the viewer's eyes, keeping the eyes directed at the
fingertip; at a certain distance, a fused three-dimensional image should seem to be hovering just above the finger.
Alternatively, a piece of paper with a small opening cut into it can be used in a similar manner; when correctly
positioned between the image pair and the viewer's eyes, it will seem to frame a small three-dimensional image.
Prismatic, self-masking glasses are now being used by some cross-eyed-view advocates. These reduce the degree of
convergence required and allow large images to be displayed. However, any viewing aid that uses prisms, mirrors or
lenses to assist fusion or focus is simply a type of stereoscope, excluded by the customary definition of freeviewing.
Stereoscopically fusing two separate images without the aid of mirrors or prisms while simultaneously keeping them
in sharp focus without the aid of suitable viewing lenses inevitably requires an unnatural combination of eye
vergence and accommodation. Simple freeviewing therefore cannot accurately reproduce the physiological depth
cues of the real-world viewing experience. Different individuals may experience differing degrees of ease and
comfort in achieving fusion and good focus, as well as differing tendencies to eye fatigue or strain.
Autostereogram
An autostereogram is a single-image stereogram (SIS), designed to create the visual illusion of a three-dimensional
(3D) scene within the human brain from an external two-dimensional image. In order to perceive 3D shapes in these
autostereograms, one must overcome the normally automatic coordination between focusing and vergence.
Stereoscope and stereographic cards
The stereoscope is essentially an instrument in which two photographs of the same object, taken from slightly
different angles, are simultaneously presented, one to each eye. A simple stereoscope is limited in the size of the
image that may be used. A more complex stereoscope uses a pair of horizontal periscope-like devices, allowing the
use of larger images that can present more detailed information in a wider field of view.
Transparency viewers
A View-Master Model E of the 1950s
Some stereoscopes are designed for viewing transparent photographs
on film or glass, known as transparencies or diapositives and
commonly called slides. Some of the earliest stereoscope views, issued
in the 1850s, were on glass. In the early 20th century, 45x107mm and
6x13cm glass slides were common formats for amateur stereo
photography, especially in Europe. In later years, several film-based
formats were in use. The best-known formats for commercially issued
stereo views on film are Tru-Vue, introduced in 1931, and
View-Master, introduced in 1939 and still in production. For amateur
stereo slides, the Stereo Realist format, introduced in 1947, is by far
the most common.
Head-mounted displays
An HMD with a separate video source displayed
in front of each eye to achieve a stereoscopic
effect
The user typically wears a helmet or glasses with two small LCD or
OLED displays with magnifying lenses, one for each eye. The
technology can be used to show stereo films, images or games, but it
can also be used to create a virtual display. Head-mounted displays
may also be coupled with head-tracking devices, allowing the user to
"look around" the virtual world by moving their head, eliminating the
need for a separate controller. Performing this update quickly enough
to avoid inducing nausea in the user requires a great amount of
computer image processing. If six-axis position sensing (direction and position) is used, then the wearer may move about within the limitations of the equipment used. Owing to rapid advancements in computer graphics and the continuing miniaturization of video and other equipment, these devices are beginning to become available at more reasonable cost.
Head-mounted or wearable glasses may be used to view a see-through image imposed upon the real world view,
creating what is called augmented reality. This is done by reflecting the video images through partially reflective
mirrors. The real world view is seen through the mirrors' reflective surface. Experimental systems have been used for
gaming, where virtual opponents may peek from real windows as a player moves about. This type of system is
expected to have wide application in the maintenance of complex systems, as it can give a technician what is
effectively "x-ray vision" by combining computer graphics rendering of hidden elements with the technician's natural
vision. Additionally, technical data and schematic diagrams may be delivered to this same equipment, eliminating
the need to obtain and carry bulky paper documents.
Augmented stereoscopic vision is also expected to have applications in surgery, as it allows the combination of
radiographic data (CAT scans and MRI imaging) with the surgeon's vision.
Virtual retinal displays
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), not to be
confused with a "Retina Display", is a display technology that draws a raster image (like a television picture) directly
onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them.
For true stereoscopy, each eye must be provided with its own discrete display. To produce a virtual display that
occupies a usefully large visual angle but does not involve the use of relatively large lenses or mirrors, the light
source must be very close to the eye. A contact lens incorporating one or more semiconductor light sources is the
form most commonly proposed. As of 2013, the inclusion of suitable light-beam-scanning means in a contact lens is
still very problematic, as is the alternative of embedding a reasonably transparent array of hundreds of thousands (or
millions, for HD resolution) of accurately aligned sources of collimated light.
A pair of LC shutter glasses used to view XpanD
3D films. The thick frames conceal the
electronics and batteries.
RealD circular polarized glasses
3D viewers
There are two categories of 3D viewer technology, active and passive.
Active viewers have electronics which interact with a display. Passive
viewers filter constant streams of binocular input to the appropriate
eye.
Active
Shutter systems
A shutter system works by openly presenting the image intended for
the left eye while blocking the right eye's view, then presenting the
right-eye image while blocking the left eye, and repeating this so
rapidly that the interruptions do not interfere with the perceived fusion
of the two images into a single 3D image. It generally uses liquid
crystal shutter glasses. Each eye's glass contains a liquid crystal layer
which has the property of becoming dark when voltage is applied,
being otherwise transparent. The glasses are controlled by a timing
signal that allows the glasses to alternately darken over one eye, and
then the other, in synchronization with the refresh rate of the screen.
Passive
Polarization systems
To present stereoscopic pictures, two images are projected superimposed onto the same screen through polarizing
filters or presented on a display with polarized filters. For projection, a silver screen is used so that polarization is
preserved. The viewer wears low-cost eyeglasses which also contain a pair of opposite polarizing filters. As each
filter only passes light which is similarly polarized and blocks the opposite polarized light, each eye only sees one of
the images, and the effect is achieved.
Interference filter systems
This technique uses specific wavelengths of red, green, and blue for the right eye, and different wavelengths of red,
green, and blue for the left eye. Eyeglasses which filter out the very specific wavelengths allow the wearer to see a
full color 3D image. It is also known as spectral comb filtering or wavelength multiplex visualization or
super-anaglyph. Dolby 3D uses this principle. The Omega 3D/Panavision 3D system also used an improved version of this technology.[16] In June 2012 the Omega 3D/Panavision 3D system was discontinued by DPVO Theatrical, which marketed it on behalf of Panavision, citing challenging global economic and 3D market conditions.[17] Although DPVO dissolved its business operations, Omega Optical continues promoting and selling 3D systems to non-theatrical markets. Omega Optical's 3D system comprises projection filters and 3D glasses. In addition to the passive stereoscopic 3D system, Omega Optical has produced enhanced anaglyph 3D glasses. Omega's red/cyan anaglyph glasses use complex metal oxide thin-film coatings and high-quality annealed glass optics.
Anaglyph 3D glasses
Color anaglyph systems
Anaglyph 3D is the name given to the stereoscopic 3D effect achieved
by means of encoding each eye's image using filters of different
(usually chromatically opposite) colors, typically red and cyan.
Anaglyph 3D images contain two differently filtered colored images,
one for each eye. When viewed through the "color-coded" "anaglyph
glasses", each of the two images reaches one eye, revealing an
integrated stereoscopic image. The visual cortex of the brain fuses this
into perception of a three dimensional scene or composition.
Chromadepth system
ChromaDepth glasses with prism-like film
The ChromaDepth process of American Paper Optics is based on the fact that a prism separates colors by varying degrees. ChromaDepth eyeglasses contain special viewing foils consisting of microscopically small prisms, which displace the image by an amount that depends on its color. If a prism foil is used over one eye but not the other, the two perceived images are separated to a degree that depends on color, and the brain produces a spatial impression from this difference. The chief advantage of this technology is that ChromaDepth pictures can also be viewed without eyeglasses, appearing as problem-free two-dimensional images (unlike two-color anaglyphs). However, the colors are only limitedly selectable, since they carry the depth information of the picture: if the color of an object is changed, its observed distance will also change.[citation needed]
KMQ stereo prismatic viewer with openKMQ
plastics extensions
Pulfrich method
The Pulfrich effect is based on the phenomenon of the human eye
processing images more slowly when there is less light, as when
looking through a dark lens. Because the Pulfrich effect depends on
motion in a particular direction to instigate the illusion of depth, it is
not useful as a general stereoscopic technique. For example, it cannot
be used to show a stationary object apparently extending into or out of
the screen; similarly, objects moving vertically will not be seen as
moving in depth. Incidental movement of objects will create spurious
artifacts, and these incidental effects will be seen as artificial depth not
related to actual depth in the scene.
Over/under format
Stereoscopic viewing is achieved by placing an image pair one above the other. Special viewers made for the over/under format tilt the right line of sight slightly up and the left slightly down. The most common viewer using mirrors is the View Magic; another, using prismatic glasses, is the KMQ viewer. A recent use of this technique is the openKMQ project.
Other display methods without viewers
Autostereoscopy
The Nintendo 3DS uses parallax barrier
autostereoscopy to display a 3D image.
Autostereoscopic display technologies use optical components in the
display, rather than worn by the user, to enable each eye to see a
different image. Because headgear is not required, it is also called
"glasses-free 3D". The optics split the images directionally into the
viewer's eyes, so the display viewing geometry requires limited head
positions that will achieve the stereoscopic effect. Automultiscopic
displays provide multiple views of the same scene, rather than just two.
Each view is visible from a different range of positions in front of the
display. This allows the viewer to move left-right in front of the
display and see the correct view from any position. The technology
includes two broad classes of displays: those that use head-tracking to
ensure that each of the viewer's two eyes sees a different image on the
screen, and those that display multiple views so that the display does
not need to know where the viewers' eyes are directed. Examples of autostereoscopic display technology include lenticular lens, parallax barrier, volumetric display, holography and light field displays.
Holography
Laser holography, in its original "pure" form of the photographic transmission hologram, is the only technology yet
created which can reproduce an object or scene with such complete realism that the reproduction is visually
indistinguishable from the original, given the original lighting conditions. It creates a light field identical to that
which emanated from the original scene, with parallax about all axes and a very wide viewing angle. The eye
differentially focuses objects at different distances and subject detail is preserved down to the microscopic level. The
effect is exactly like looking through a window. Unfortunately, this "pure" form requires the subject to be laser-lit and completely motionless (to within a minor fraction of the wavelength of light) during the photographic exposure, and laser light must be used to properly view the results. Most people have never seen a laser-lit
transmission hologram. The types of holograms commonly encountered have seriously compromised image quality
so that ordinary white light can be used for viewing, and non-holographic intermediate imaging processes are almost
always resorted to, as an alternative to using powerful and hazardous pulsed lasers, when living subjects are
photographed.
Although the original photographic processes have proven impractical for general use, the combination of
computer-generated holograms (CGH) and optoelectronic holographic displays, both under development for many
years, has the potential to transform the half-century-old pipe dream of holographic 3D television into a reality; so
far, however, the large amount of calculation required to generate just one detailed hologram, and the huge
bandwidth required to transmit a stream of them, have confined this technology to the research laboratory.
Volumetric displays
Volumetric displays use some physical mechanism to display points of light within a volume. Such displays use
voxels instead of pixels. Volumetric displays include multiplanar displays, which have multiple display planes
stacked up, and rotating panel displays, where a rotating panel sweeps out a volume.
Other technologies have been developed to project light dots in the air above a device. An infrared laser is focused
on the destination in space, generating a small bubble of plasma which emits visible light.
Integral imaging
Integral imaging is an autostereoscopic or multiscopic 3D display, meaning that it displays a 3D image without the
use of special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a
lenticular lens) in front of the image, where each lens looks different depending on viewing angle. Thus rather than
displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images
that exhibit parallax when the viewer moves.
Wiggle stereography
Wiggle stereoscopy is an image display technique achieved by quickly alternating display of the left and right sides of a stereogram. It is commonly found in animated GIF format on the web; online examples are visible in the New York Public Library stereogram collection.[18] The technique is also known as "Piku-Piku".[19]
Stereo photography techniques
Film photography
The Stereo Realist, which defined a new stereo format. The
middle lens is for view-finding.
It is necessary to take two photographs from different
horizontal positions to get a true stereoscopic image pair.
This can be done with two separate side-by-side cameras;
with one camera moved from one position to another
between exposures; with one camera and a single exposure
by means of an attached mirror or prism arrangement that
presents a stereoscopic image pair to the camera lens; or with
a stereo camera incorporating two or more side-by-side
lenses.
As part of a wider 3-D craze that swept the US in 1953,
stereoscopic photography enjoyed a surge of popularity and a
new generation of stereoscopic cameras appeared on the
market. More compact and convenient than their pre-World
War II predecessors, they adopted the increasingly popular 135 film (35mm) format that allowed the use of
Kodachrome color film, which produced color transparencies ("slides") instead of prints on paper. The relative
novelty of Kodachrome's vivid colors and the realism of 3-D were each attractive individually, but the astonishingly
lifelike effect of the two combined proved irresistible to many consumers. The Stereo Realist camera, introduced in
1947, was the pioneer.
Sputnik stereo camera (Soviet Union, 1960s). Although there are three lenses present, only the lower two are used for the photograph; the third lens serves as a viewfinder for composition. The Sputnik produces two side-by-side square images on 120 film.
Already advertised with celebrity endorsements and well-established when the surge arrived in 1953, it was widely copied
but maintained its lead. Its 5P (five film perforations per image) format
was adopted as a standard by most of its competitors, including Kodak.
The new cameras were marketed with corresponding two-lensed
Realist-format slide viewers, which typically had a built-in light source
and adjustable optics. With only these two items the owner could
capture, relive and share multicolored and stereoscopically preserved
memories. For group viewing and perhaps even greater realism, a
polarized stereoscopic slide projector and silver screen could be added
to the system. The popularity of stereoscopic photography waned along
with the 1950s 3-D fad, but not so quickly or completely. Subsequent
decades found new users replenishing the ranks of loyal devotees, and
even today, despite the general transition from film to digital and from
slide viewing and projection to slide scanning and video display, some
of this sturdy equipment is still in use by a small core of enthusiasts of
all ages.
The 1980s saw a minor revival of stereoscopic photography when several point-and-shoot stereo cameras were
introduced. Most of these cameras suffered from poor optics and plastic construction, and were designed to produce
lenticular prints, a format which never gained wide acceptance, so they never gained the popularity of the 1950s
stereo cameras.
Digital photography
The beginning of the 21st century marked the coming of the age of digital photography. Stereo lenses were introduced that could turn an ordinary film camera into a stereo camera, using a special double lens to take two images and direct them through a single lens so they are captured side by side on the film. Although current digital stereo cameras cost hundreds of dollars,[20] cheaper models also exist, such as those produced by the company Loreo. It is also possible to create a twin-camera rig by mounting two cameras on a bracket, spaced slightly apart, together with a "shepherd" device to synchronize the shutters and flashes of the two cameras. Newer cameras are even being used to shoot "step video" 3D slide shows with many pictures, almost like a 3D motion picture if viewed properly. A modern camera can take ten pictures per second, with images that greatly exceed HDTV resolution.
If anything is in motion within the field of view, it is necessary to take both images at once, either through use of a
specialized two-lens camera, or by using two identical cameras, operated as close as possible to the same moment.
A single camera can also be used if the subject remains perfectly still (such as an object in a museum display). Two
exposures are required. The camera can be moved on a sliding bar for offset, or with practice, the photographer can
simply shift the camera while holding it straight and level. This method of taking stereo photos is sometimes referred to as the "Cha-Cha" or "Rock and Roll" method,[21] and also as the "astronaut shuffle", because it was used to take stereo pictures on the surface of the Moon using normal monoscopic equipment.[22]
For the most natural-looking stereo, most stereographers move the camera about 65mm, the distance between the eyes, but some experiment with other distances. A good rule of thumb is to shift sideways 1/30th of the distance to the closest subject for 'side by side' display, or just 1/60th if the image is also to be used for color anaglyph or anachrome display. For example, when enhanced depth beyond natural vision is desired and a photo is being taken of a person standing in front of a house, with the person thirty feet away, the camera should be moved 1 foot between shots.
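As a rough sketch, the 1/30th rule described above can be expressed in a few lines of Python; the function name and interface are illustrative only, not from any standard tool:

```python
# Hedged sketch of the rule of thumb above: shift the camera sideways by
# 1/30 of the distance to the closest subject for side-by-side display,
# or 1/60 if the pair will also be used for anaglyph/anachrome display.
# Works in any unit (feet, meters, ...).
def stereo_base(closest_subject_distance, display="side-by-side"):
    divisor = 30 if display == "side-by-side" else 60
    return closest_subject_distance / divisor

# The text's example: a person thirty feet away -> move the camera 1 foot.
print(stereo_base(30))               # 1.0 (feet)
print(stereo_base(30, "anaglyph"))   # 0.5 (feet) for anaglyph display
```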
The stereo effect is not significantly diminished by slight pan or rotation between images. In fact, slight rotation inwards (also called 'toe-in') can be beneficial. Bear in mind that both images should show the same objects in the scene, just from different angles: if a tree is on the edge of one image but out of view in the other, it will appear in a ghostly, semi-transparent way to the viewer, which is distracting and uncomfortable. Therefore, the images are cropped so they completely overlap, or the cameras are 'toed in' so that the images completely overlap without having to discard any of the image area. However, too much 'toe-in' can cause 'keystoning' and eye strain.[23]
Digital stereo bases (baselines)
There are different cameras with different stereo bases (the distance between the two camera lenses) in the non-professional market of 3D digital cameras, used for video and also for stills:
10mm Panasonic 3D Lumix H-FT012 lens (for the GH2, GF2, GF3, GF5 cams and also for the hybrid W8 cam).
12mm Praktica and Medion 3D (two clones of the DXG-5D8 cam).
20mm Sony Bloggie 3D.
23mm Loreo 3D Macro lens.
25mm LG Optimus 3D and LG Optimus 3D MAX smartphones and the close-up macro adapter for the W1 and W3 Fujifilm cams.
28mm Sharp Aquos SH80F smartphone and the Toshiba Camileo z100 camcorder.
30mm Panasonic 3D1 camera.
32mm HTC EVO 3D smartphone.
35mm JVC TD1, DXG-5G2V and Vivitar 790 HD (only for anaglyph stills and video) camcorders.
40mm Aiptek I2, Aiptek IS2, Aiptek IH3 and Viewsonic 3D cams.
50mm Loreo for full frame cams, and the 3D FUN cam of 3dInlife.
55mm SVP dc-3D-80 cam (parallel & anaglyph, stills & video).
60mm Vivitar 3D cam (only for anaglyph pictures).
75mm Fujifilm W3 cam.
77mm Fujifilm W1 cam.
88mm Loreo 3D lens for digital cams.
140mm Cyclopital3D base extender for the JVC TD1 and Sony TD10.
200mm Cyclopital3D base extender for the Panasonic AG-3DA1.
225mm Cyclopital3D base extender for the Fujifilm W1 and W3 cams.
Base line selection
Fujifilm FinePix Real 3D W3
For general-purpose stereo photography, where the goal is to duplicate natural human vision and give a visual impression as close as possible to actually being there, the correct baseline (the distance between the points where the right and left images are taken) is the same as the distance between the eyes.[24] When images taken with such a baseline are viewed using a method that duplicates the conditions under which they were taken, the result is an image much the same as what would be seen at the site where the photo was taken. This can be described as "ortho stereo."
An example would be the Realist format that was so popular in the late 1940s to mid-1950s and is still being used by
some today. When these images are viewed using high quality viewers, or seen with a properly set up projector, the
impression is, indeed, very close to being at the site of photography.
The baseline used in such cases will be about 50mm to 80mm. This is what is generally referred to as a "normal"
baseline, used in most stereo photography. There are, however, situations where it might be desirable to use a longer
or shorter baseline. The factors to consider include the viewing method to be used and the goal in taking the picture.
Note that the concept of baseline also applies to other branches of stereography, such as stereo drawings and
computer generated stereo images, but it involves the point of view chosen rather than actual physical separation of
cameras or lenses.
Longer base line for distant objects "Hyper Stereo"
If a stereo picture is taken of a large, distant object such as a mountain or a large building using a normal base, it will appear to be flat. This is in keeping with normal human vision; it would look flat if one were actually there. But if the object looks flat, there doesn't seem to be any point in taking a stereo picture, as it will simply seem to sit behind a stereo window, with no depth in the scene itself, much like looking at a flat photograph from a distance.
One way of dealing with this situation is to include a foreground object to add depth interest and enhance the feeling of "being there"; this is the advice commonly given to novice stereographers.[25][26] Caution must be used, however, to ensure that the foreground object is not too prominent and appears to be a natural part of the scene; otherwise it will seem to become the subject, with the distant object merely the background.[27] In cases like this, if the picture is just one of a series whose other pictures show more dramatic depth, it might make sense simply to leave it flat, behind a window.[27]
Midtown Manhattan stereo photograph, arranged for cross-eyed viewing
Hyperstereo example taken from an airplane while flying over Greenland, arranged for cross-eyed viewing
For making stereo images featuring only a distant object (e.g., a mountain with foothills), the camera positions can be separated by a larger distance (called the "interaxial" or stereo base, often mistakenly called "interocular") than the adult human norm of 62-65mm. This will effectively render the captured image as though it were seen by a giant, and thus will enhance the depth perception of these distant objects while reducing the apparent scale of the scene proportionately.[28] However, in this case care must be taken not to bring objects in the close foreground too close to the viewer, as they will show excessive parallax and can complicate stereo window adjustment.
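The proportional miniaturization mentioned above can be sketched as a one-line relation. The assumption that apparent scale shrinks in direct proportion to the baseline is a simplification for illustration, and the function name is mine:

```python
HUMAN_BASE_MM = 65.0  # adult interocular norm cited in the text (62-65 mm)

def apparent_scale(stereo_base_mm):
    """Rough apparent scale of the scene: 1.0 = natural, 0.1 = ten times smaller."""
    return HUMAN_BASE_MM / stereo_base_mm

print(apparent_scale(65))    # 1.0: a normal base gives natural scale
print(apparent_scale(650))   # 0.1: a 0.65 m base makes the scene read ~1/10 scale
```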
There are two main ways to accomplish this. One is to use two cameras separated by the required distance; the other is to shift a single camera the required distance between shots.
The shift method has been used with cameras such as the Stereo Realist to take hypers, either by taking two pairs and selecting the best frames, or by alternately capping each lens and recocking the shutter.[29]
Moon stereo from 1897 taken using libration. Anaglyph, red left.
3D red cyan glasses are recommended to view this image
correctly.
It is also possible to take hyperstereo pictures with an ordinary single-lens camera aimed out of an airplane. One must be careful, however, about the movement of clouds between shots.[30] It has even been suggested that a version of hyperstereo could be used to help pilots fly planes.[31]
In such situations, where an ortho stereo viewing method is used, a common rule of thumb is the 1:30 rule:[32] the baseline is made equal to 1/30 of the distance to the nearest object included in the photograph.
The results of hyperstereo can be quite impressive,[33][34][35] and examples of hyperstereo can be found in vintage views.[36]
This technique can be applied to 3D imaging of the Moon: one picture is taken at moonrise and the other at moonset, since the face of the Moon stays centered towards the center of the Earth while the diurnal rotation carries the photographer around its perimeter. The results of this approach are rather poor,[37] however, and much better results can be obtained using alternative techniques.[37] This is why high-quality published stereos of the Moon are made using libration,[38][39][40][41] the slight "wobbling" of the Moon on its axis relative to the Earth.[42] Similar techniques were used late in the 19th century to take stereo views of Mars and other astronomical subjects.[42]
Limitations of hyperstereo
Illustration of parallax multiplication limits with A at 30 and 2000
feet
Vertical alignment can become a big problem, especially if the terrain on which the two camera positions are placed is uneven. Movement of objects in the scene can make syncing two widely separated cameras a nightmare. When a single camera is moved between two positions, even subtle movements such as plants blowing in the wind or drifting clouds can become a problem.[29] The wider the baseline, the more of a problem this becomes.
Pictures taken in this fashion take on the appearance of a miniature model photographed from a short distance,[43][44][45] and those not familiar with such pictures often cannot be convinced that they show the real object. This is because we cannot see depth when looking at such scenes in real life, and our brains aren't equipped to deal with the artificial depth created by such techniques, so our minds tell us it must be a smaller object viewed from a short distance, which would have depth. Though most viewers eventually realize it is, indeed, an image of a large object seen from far away, many find the effect bothersome.[46] This doesn't rule out using such techniques, but it is one of the factors to be considered when deciding whether or not such a technique should be used.
In movies and other forms of "3D" entertainment, hyperstereo may be used to simulate the viewpoint of a giant, with
eyes a hundred feet apart. The miniaturization would be just what the photographer (or designer in the case of
drawings/computer generated images) had in mind. On the other hand, in the case of a massive ship flying through
space the impression that it is a miniature model is probably not what the film makers intended!
Hyper stereo can also lead to cardboarding, an effect that creates stereos in which different objects seem well separated in depth, but the objects themselves seem flat. This is because parallax is quantized.[47]
Illustration of the limits of parallax multiplication (refer to the image at left; an ortho viewing method is assumed). The line represents the Z axis, so imagine that it is lying flat and stretching into the distance. If the camera is at X, point A is on an object at 30 feet. Point B is on an object at 200 feet, and point C is on the same object but 1 inch behind B. Point D is on an object 250 feet away. With a normal baseline, point A is clearly in the foreground, with B, C, and D all at stereo infinity. With a one-foot baseline, which multiplies the parallax, there will be enough parallax to separate all four points, though the depth in the object containing B and C will still be subtle. If this object is the main subject, we might consider a baseline of 6 feet 8 inches, but then the object at A would need to be cropped out.
Now imagine that the camera is at point Y: the object at A is now at 2,000 feet, point B is on an object at 2,170 feet, C is a point on the same object 1 inch behind B, and point D is on an object at 2,220 feet. With a normal baseline, all four points are now at stereo infinity. With a 67-foot baseline, the multiplied parallax allows us to see that all three objects are on different planes, yet points B and C, on the same object, appear to be on the same plane and all three objects appear flat. This is because there are discrete units of parallax, so at 2,170 feet the parallax between B and C is zero, and zero multiplied by any number is still zero.
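The walkthrough above can be checked numerically. This is a minimal sketch assuming the viewing system resolves parallax in steps of one arc-minute; that figure, and all function names, are my own illustrative assumptions, since the text does not specify the size of a discrete parallax unit:

```python
import math

RESOLUTION_ARCMIN = 1.0  # assumed smallest resolvable parallax step

def parallax_arcmin(baseline_ft, distance_ft):
    """Angle (in arc-minutes) subtended by the baseline at a given distance."""
    return math.degrees(math.atan2(baseline_ft, distance_ft)) * 60

def separates(baseline_ft, near_ft, far_ft):
    """True if points at the two distances fall at least one parallax step apart."""
    delta = parallax_arcmin(baseline_ft, near_ft) - parallax_arcmin(baseline_ft, far_ft)
    return delta >= RESOLUTION_ARCMIN

# Points B and C: the same object, 1 inch apart at about 2,170 feet.
print(separates(67, 2170, 2170 + 1 / 12))  # False: B and C stay on one plane
# The whole objects at 2,170 and 2,220 feet do separate with a 67-foot base:
print(separates(67, 2170, 2220))           # True
```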
Small anaglyphed image
3D red cyan glasses are
recommended to view this
image correctly.
A practical example
In the red-cyan anaglyph example below, a ten-meter baseline atop the roof ridge of a
house was used to image the mountain. The two foothill ridges are about four miles
(6.5km) distant and are separated in depth from each other and the background. The
baseline is still too short to resolve the depth of the two more distant major peaks
from each other. Owing to various trees that appeared in only one of the images the
final image had to be severely cropped at each side and the bottom.
In the wider image, taken from a different location, a single camera was walked about one hundred feet (30m) between pictures. The images were converted to monochrome before combination (below).
In recent years cameras have been produced which are designed to stereograph subjects at distances of 10" to 20" using print film, with a 27mm baseline.[50] Another technique, usable with fixed-base cameras such as the Fujifilm FinePix Real 3D W1/W3, is to back off from the subject and use the zoom function to zoom to a closer view, as was done in the image of a cake; this has the effect of reducing the effective baseline. Similar techniques could be used with paired digital cameras.
Another way to take images of very small objects, "extreme macro", is to use an ordinary flatbed scanner. This is a variation on the shift technique in which the object is turned upside down and placed on the scanner, scanned, moved over, and scanned again. It produces stereos of a range of objects from as large as about 6" across down to as small as a carrot seed. This technique goes back to at least 1995. See the article Scanography for more details.
In stereo drawings and computer-generated stereo images, a smaller-than-normal baseline may be built into the
constructed images to simulate a "bug's-eye" view of the scene.
Baseline tailored to viewing method
The distance from which a picture will be viewed determines the required separation between the cameras. This
separation is called the stereo base, or stereo baseline, and follows from the ratio of the viewing distance to the
distance between the eyes (usually about 2.5 inches). In any case, the farther the screen is viewed from, the more the
image will pop out; the closer the screen is viewed from, the flatter it will appear. Personal anatomical differences
can be compensated for by moving closer to or farther from the screen.
To provide close emulation of natural vision for images viewed on a computer monitor, a fixed stereo base of 6 cm
might be appropriate. This will vary depending on the size of the monitor and the viewing distance. For hyperstereo,
a ratio smaller than 1:30 could be used. For example, if a stereo image is to be viewed on a computer monitor from
a distance of 1000 mm, there will be an eye-to-view ratio of 1000/63, or about 16. To set the cameras the appropriate
distance apart for the desired effect, the distance to the subject (say a person 3 meters from the cameras) is divided
by 16, which yields a stereo base of 188 mm between the cameras.
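The arithmetic above can be written as a short helper; the numbers below are those from the example (63 mm eye separation, monitor viewed from 1000 mm, subject at 3 m):

```python
def stereo_base(subject_dist_mm, view_dist_mm, eye_sep_mm=63.0):
    """Camera separation from the eye-to-view ratio described above."""
    ratio = view_dist_mm / eye_sep_mm   # ~16 for a monitor at 1000 mm
    return subject_dist_mm / ratio

base = stereo_base(3000.0, 1000.0)
print(round(base))   # 189 mm; the text rounds the ratio to 16 first, giving 188
```

The small discrepancy (188 vs. 189 mm) comes only from rounding the ratio before dividing; in practice either value serves.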
However, images optimized for a small screen viewed from a short distance will show excessive parallax when
viewed with more ortho methods, such as a projected image or a head-mounted display, possibly causing eyestrain
and headaches, or doubling, so pictures optimized for this viewing method may not be usable with other methods.
Where images may also be used for anaglyph display, a narrower base, say 40 mm, will allow for less ghosting in the
display.
Variable base for "geometric stereo"
As mentioned previously, the goal of the photographer may be a reason for using a baseline larger than
normal. Such is the case when, instead of trying to achieve a close emulation of natural vision, a stereographer
tries to achieve geometric perfection. This approach means that objects are shown with the shape they actually
have, rather than the way they are seen by humans.
Objects at 25 to 30 feet, instead of having the subtle depth that someone standing there would see, or what would be
recorded with a normal baseline, will have the much more dramatic depth that would be seen from 7 to 10 feet. So
instead of seeing objects as one would with eyes 2 1/2" apart, they would be seen as they would appear if one's eyes
were 12" apart. In other words, the baseline is chosen to produce the same depth effect regardless of the distance
from the subject. As with true ortho, this effect is impossible to achieve in a literal sense, since different objects
in the scene will be at different distances and will thus show different amounts of parallax, but the geometric
stereographer, like the ortho stereographer, attempts to come as close as possible.
Achieving this could be as simple as using the 1:30 rule to find a custom base for every shot, regardless of distance,
or it could involve using a more complicated formula.[51]
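The simple 1:30 rule of thumb mentioned above reduces to a one-line calculation; the second call shows the stronger geometric/hyper choice of ratio used in the earlier monitor example:

```python
def base_from_rule(nearest_dist, ratio=30.0):
    """1:30 rule of thumb: stereo base is 1/30 of the nearest subject
    distance. Works in any unit; a smaller ratio gives stronger depth."""
    return nearest_dist / ratio

print(base_from_rule(3000.0))        # subject 3 m away -> 100.0 mm base
print(base_from_rule(3000.0, 16.0))  # stronger depth choice -> 187.5 mm
```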
This could be thought of as a form of hyperstereo,[52] but less extreme. As a result, it has all of the same limitations
as hyperstereo. When objects are given enhanced depth but not magnified to take up a larger portion of the view,
there is a certain miniaturization effect. Of course, this may be exactly what the stereographer has in mind.
While geometric stereo neither attempts nor achieves a close emulation of natural vision, there are valid reasons for
this approach. It does, however, represent a very specialized branch of stereography.
Precise stereoscopic baseline calculation methods
Recent research has led to precise methods for calculating the stereoscopic camera baseline.[53] These techniques
consider the geometry of the display/viewer and scene/camera spaces independently and can be used to reliably
calculate a mapping of the captured scene depth to a comfortable display depth budget. This frees the
photographer to place the camera wherever they wish to achieve the desired composition, then use the baseline
calculator to work out the camera inter-axial separation required to produce the desired effect.
This approach means there is no guesswork in the stereoscopic setup once a small set of parameters has been
measured; it can be implemented for photography and computer graphics, and the methods can easily be built into
a software tool.
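One common flavor of such a calculator can be sketched as follows. This is not the cited method, only a simplified parallel-camera (shifted-sensor) model, where on-sensor disparity of a point at depth Z is f·B/Z, so the near-to-far disparity range is f·B·(1/Zn − 1/Zf); all parameter values are assumptions for illustration:

```python
def baseline_for_budget(z_near_m, z_far_m, focal_mm, px_per_mm, budget_px):
    """Inter-axial separation (mm) that fits the disparity range between the
    nearest and farthest scene points into a display depth budget, for
    parallel cameras where disparity d = f*B/Z."""
    z_near = z_near_m * 1000.0   # metres -> millimetres
    z_far = z_far_m * 1000.0
    budget_mm = budget_px / px_per_mm
    return budget_mm / (focal_mm * (1.0 / z_near - 1.0 / z_far))

# e.g. scene from 2 m to 20 m, 35 mm lens, 200 px/mm sensor, 40 px budget
b = baseline_for_budget(2.0, 20.0, 35.0, 200.0, 40.0)
print(round(b, 1))   # ~12.7 mm inter-axial separation
```

Given measured display and scene parameters, the formula inverts directly for B, which is what removes the guesswork: composition and baseline become independent decisions.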
Multi-rig stereoscopic cameras
The precise methods for camera control have also allowed the development of multi-rig stereoscopic cameras, where
different slices of scene depth are captured using different inter-axial settings;[54] the images of the slices are then
composited together to form the final stereoscopic image pair. This allows important regions of a scene to be given
better stereoscopic representation while less important regions are assigned less of the depth budget. It provides
stereographers with a way to manage composition within the limited depth budget of each individual display
technology.
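A hypothetical sketch of that budget-splitting idea, again under the simplified parallel-camera model above (slice boundaries, budget shares, and camera parameters are all invented for illustration):

```python
# Carve the scene into depth slices and give each its own inter-axial
# separation, spending more of the display disparity budget on the
# foreground slice than on the distant background.
def slice_baselines(slices, focal_mm, px_per_mm, total_budget_px):
    """slices: list of (z_near_m, z_far_m, budget_share) with shares
    summing to 1. Returns one baseline (mm) per slice."""
    baselines = []
    for z_near_m, z_far_m, share in slices:
        budget_mm = share * total_budget_px / px_per_mm
        rng = 1.0 / (z_near_m * 1000.0) - 1.0 / (z_far_m * 1000.0)
        baselines.append(budget_mm / (focal_mm * rng))
    return baselines

# foreground gets half the budget; the far background gets the least
rigs = slice_baselines([(1, 3, 0.5), (3, 10, 0.3), (10, 100, 0.2)],
                       35.0, 200.0, 40.0)
```

With these numbers the farther slices need progressively larger baselines to use even their smaller shares of the budget, which is exactly why a single-baseline rig wastes depth resolution on one region or another.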
Stereo Window
For any branch of stereoscopy, the concept of the stereo window is important. If a scene is viewed through a window,
the entire scene would normally be behind the window. If the scene is distant, it would be some distance behind the
window; if it is nearby, it would appear to be just beyond the window. An object smaller than the window itself
could even pass through the window and appear partially or completely in front of it. The same applies to a part of a
larger object that is smaller than the window.
The goal of setting the stereo window is to duplicate this effect.
To truly understand the concept of window adjustment, it is necessary to understand where the stereo window itself
is. In the case of projected stereo, including "3D" movies, the window is the surface of the screen. With
printed material the window is at the surface of the paper. When stereo images are seen by looking into a viewer, the
window is at the position of the frame. In the case of Virtual Reality, the window seems to disappear as the scene
becomes truly immersive.
In the case of paired images, moving the images farther apart will move the entire scene back; moving the images
closer together will move the scene forward. Note that this does not affect the relative positions of objects within the
scene, only their position relative to the window. Similar principles apply to anaglyph images and other stereoscopy
techniques.
There are several considerations in deciding where to place the scene relative to the window.
First, in the case of an actual physical window, the left eye will see less of the left side of the scene and the right eye
will see less of the right side, because the view is partly blocked by the window frame. This principle, known
as "less to the left on the left" or 3L, is often used as a guide when adjusting a stereo window in which all
objects are to appear behind the window. When the images are moved farther apart, the outer edges are cropped by
the same amount, thus duplicating the effect of a window frame.
Another consideration involves deciding where individual objects are placed relative to the window. It would be
normal for the frame of an actual window to partly overlap or "cut off" an object that is behind the window. Thus an
object behind the stereo window might be partly cut off by the frame or side of the stereo window. So the stereo
window is often adjusted to place objects that are cut off by the window behind it. If an object, or part of an object, is
not cut off by the window, then it could be placed in front of it, and the stereo window may be adjusted with this in
mind. This effect is how swords, bugs, flashlights, etc., often seem to "come off the screen" in 3D movies.
If an object that is cut off by the window is placed in front of it, the result is somewhat unnatural and usually
considered undesirable; this is often called a "window violation". It can best be understood by returning to
the analogy of an actual physical window. An object in front of the window would not be cut off by the window
frame but would, rather, continue to the right and/or left of it. This can't be duplicated in stereography techniques
other than Virtual Reality, so the stereo window will normally be adjusted to avoid window violations. There are,
however, circumstances where they could be considered permissible.
A third consideration is viewing comfort. If the window is adjusted too far back, the right and left images of distant
parts of the scene may be more than 2.5" apart, requiring the viewer's eyes to diverge in order to fuse them. This
results in image doubling and/or viewer discomfort. In such cases a compromise is necessary between viewing
comfort and the avoidance of window violations.
In stereo photography, window adjustment is accomplished by shifting/cropping the images; in other forms of
stereoscopy, such as drawings and computer-generated images, the window is built into the design of the images as
they are generated. It is by design that in CGI movies certain images are behind the screen while others are in
front of it.
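The shift/crop adjustment can be sketched on toy "images" represented as lists of pixel rows. This is a minimal sketch, not a production routine: cropping s columns from the left image's left edge and from the right image's right edge gives every feature s pixels of uncrossed parallax, moving the whole scene back behind the window, in keeping with "less to the left on the left".

```python
def set_window(left, right, s):
    """Move the scene s pixels behind the window by cropping the left
    image's left edge and the right image's right edge."""
    new_left = [row[s:] for row in left]            # left eye loses its left side
    new_right = [row[:len(row) - s] for row in right]
    return new_left, new_right

left = [[0, 1, 2, 3, 4]]
right = [[0, 1, 2, 3, 4]]
l2, r2 = set_window(left, right, 1)
# A feature at column 3 now sits at column 2 in the left view and column 3
# in the right view: +1 pixel of uncrossed disparity, i.e. behind the window.
```

Cropping the opposite edges instead would produce crossed disparity and pull the scene in front of the window, which is how the "off the screen" effects described above are staged.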
Bibliography
Footnotes
[1] Tufts.edu (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=stereo/s), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
[2] (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=skope/w), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library
[3] Flight Simulation, J. M. Rolfe and K. J. Staples, Cambridge University Press, 1986, page 134
[4] Contributions to the Physiology of Vision. Part the First. On some remarkable, and hitherto unobserved, Phenomena of Binocular Vision. By Charles Wheatstone, F.R.S., Professor of Experimental Philosophy in King's College, London. Stereoscopy.com (http://www.stereoscopy.com/library/wheatstone-paper1838.html)
[5] Welling, William. Photography in America, page 23
[6] Stereo Realist Manual, p. 375.
[7] Stereo Realist Manual, pp. 377–379.
[8] Fay Huang, Reinhard Klette, and Karsten Scheibe: Panoramic Imaging (Sensor-Line Cameras and Laser Range-Finders). Wiley & Sons, Chichester, 2008
[13] The Logical Approach to Seeing 3D Pictures (http://www.vision3d.com/3views.html). www.vision3d.com by Optometrists Network. Retrieved 2009-08-21
[14] How To Freeview Stereo (3D) Images (http://www.angelfire.com/ca/erker/freeview.html). Greg Erker. Retrieved 2009-08-21
[15] How to View Photos on This Site (http://www.3dphoto.net/text/viewing/technique.html). Stereo Photography – The World in 3D. Retrieved 2009-08-21
[16] "Seeing is believing"; Cinema Technology, Vol 24, No. 1, March 2011
[17] http://www.dpvotheatrical.com/
[18] http://stereo.nypl.org/create
[19] http://www.shortcourses.com/stereo/stereo1-17.html
[22] Stereo World, National Stereoscopic Association, Vol 17 #3, pp. 4–10
[25] Stereo Realist Manual, p. 27.
[26] Stereo Realist Manual, p. 261.
[27] Stereo Realist Manual, p. 156.
[29] Stereo World Volume 37 #1, Inside Front Cover
[30] Stereo World Vol 21 #1, March/April 1994, IFC, 51
[31] Stereo World Vol 16 #1, March/April 1989, pp. 36–37
[33] Stereo World Vol 16 #2, May/June 1989, pp. 20–21
[34] Stereo World Vol 8 #1, March/April 1981, pp. 16–17
[35] Stereo World Vol 31 #6, May/June 2006, pp. 16–22
[36] Stereo World Vol 17 #5, Nov/Dec 1990, pp. 32–33
[37] Stereo Lunar Photos by John C. Ballou (http://home.comcast.net/~jlballou/LunarStereo/index.html), an in-depth look at moon stereos with examples using several techniques
[38] Stereo World Vol 23 #2, May/June 1996, pp. 25–30
[41] London Stereoscopic Company Official Web Site (http://www.londonstereo.com/stereophotography2.html), a more in-depth explanation
[42] Stereo World Vol 15 #3, July/August 1988, pp. 25–30
[44] The Vision of Hyperspace, Arthur Chandler, 1975, Stereo World, vol 2 #5, pp. 2–3, 12
[46] Hyperspace, a comment, Paul Wing, 1976, Stereo World, vol 2 #6, page 2
[48] Willke & Zakowski
[49] Simmons
References
Sources
Simmons, Gordon (March/April 1996). "Clarence G. Henning: The Man Behind the Macro". Stereo World 23 (1): 37–43.
Willke, Mark A.; Zakowski, Ron (March/April 1996). "A Close Look into the Realist Macro Stereo System". Stereo World 23 (1): 14–35.
Morgan, Willard D.; Lester, Henry M. (October 1954). Stereo Realist Manual. With 14 contributors. New York: Morgan & Lester. OCLC 789470 (http://www.worldcat.org/oclc/789470).
External links
Stereoscopy (http://www.dmoz.org/Arts/Photography/Techniques_and_Styles/3D/) at the Open Directory Project
The Quantitative Analysis of Stereoscopic Effect (http://www.vicgi.com/lenticular-printing-quantitive-analysis.html)
Durham Visualization Laboratory stereoscopic imaging methods and software tools (http://www.binocularity.org)
University of Washington Libraries Digital Collections – Stereocard Collection (http://content.lib.washington.edu/stereoweb/)
Stereographic Views of Louisville and Beyond, 1850s–1930 (http://digital.library.louisville.edu/cdm/landingpage/collection/stereographs/) from the University of Louisville Libraries
Stereoscopy (http://www.flickr.com/photos/boston_public_library/sets/72157604192771132/) on Flickr
Extremely rare and detailed Stereoscopic 3D scenes (http://www.panoramio.com/user/63737/tags/3D)
International Stereoscopic Union (http://www.ISU3D.org)
3D STEREO PORTAL – Videos & Photos Collection (http://www.3dstreaming.it)
American University in Cairo Rare Books and Special Collections Digital Library – Underwood & Underwood Egypt Stereoviews Collection (http://digitalcollections.aucegypt.edu/cdm/landingpage/collection/p15795coll8)
Views of California and the West, ca. 1867–1903 (http://www.oac.cdlib.org/findaid/ark:/13030/tf3489n9sv/), The Bancroft Library
The Ten Commandments of Stereoscopy (http://www.stereoscopynews.com/download/software/923-qthe-ten-commandments-of-stereoscopy.html), article about taking good stereoscopy images (photo and video)
Moriarty, Philip. "3D Glasses" (http://www.sixtysymbols.com/videos/3d.htm). Sixty Symbols. Brady Haran for the University of Nottingham.
Article Sources and Contributors
Zone System Source: http://en.wikipedia.org/w/index.php?oldid=564869139 Contributors: Animum, Bhound89, Binksternet, Boneheadmx, Chevreul, Chris the speller, Chris1122, Chris8535,
Chrisjohnson, Christoph Braun, Cullen328, Dicklyon, Discpad, Doradus, DragonLord, Dstrehl, Duk, Dziadeck, Eeekster, Exposeitright, Eyesclosed, Fences and windows, Fi9, Flash19901,
Fourohfour, GregorB, Grendelkhan, Hertz1888, Hirumon, Hkiahsebag, Hooperbloob, Imroy, Innov8or, Jacopo188, Jcasey130, JeffConrad, Jeremyjgray, Jfpierce, Lambiam, Lik-photo, Lopifalko,
Lucassd, Mactographer, Madness, Manamarak, Michael Hardy, Mindmatrix, NathanHawking, Nbarth, Onceuponastar, Poccil, Pol098, Queenvictoria, RJASE1, Racklever, Rogerd, Sander123,
Shirtwaist, Snowmonster, Soler97, Someguy1221, Springbk, Srleffler, Ssolomon, SteveHopson, Sweetlew72, Targa86, TedE, Tenebrae, The wub, Thumperward, Tms, Vegaswikian,
Ventrilqstman, Vfp15, William Avery, Wolfmofo2754, YUL89YYZ, Yuzerid, 99 anonymous edits
High-dynamic-range imaging Source: http://en.wikipedia.org/w/index.php?oldid=569553492 Contributors: 25, 345Kai, Aashish.59, Abphoto, Acidburn24m, Adoado, Aflint, Agateller,
Alamandrax, Aleph-4, Alex Ex, Alex43223, Alfie66, Amatulic, Amorgen, Andeggs, Andres, Andy1703, AndyBQ, Antimatter15, Applepedia, Arkaru, Arslion, Art110week12, Ary29,
Askad.skynet, Atularunpandey, Avneetmangat, Axcordion, AxelBoldt, Ayushbhandari, Badon, Bambolea, BanzaiSi, Barkeep, Barley55, Bartek.okonek, Bashari, Behnam, Beland, Bender235,
Bernie Kohl, Betacommand, Bigbluefish, BitterMan, Bloodball, Bluerasberry, Bodnotbod, Bongomatic, Bons, Borb, Brenont, Canberra User, Captain-tucker, CaptainLlama, Cavinsmith,
Cbstogner, Cburnett, Cemulku, Chadernook, Charles Gaudette, Chensiyuan, Chocotito, Chris Capoccia, Chris857, ChrisCork, Ckatz, Cluedo, Cmprince, CommonsDelinker, CoolFox, Cspan64,
CuriousJM, CypherXero, DGG, DGaw, DaProx, Dannorcott, Dany 123, Darxus, Davidacc, Ddxc, Deanpemberton, Deschenays, Dicklyon, Digiphotouk, Diliff, Discospinster, Docmgmt,
DoctorElmo, Domingo2001, DonIncognito, Donarreiskoffer, Dp67, DreamGuy, Drsayis2, Dtrebbien, EJF, EagerToddler39, Eb24, Econt, EdwardFawkes, Ehn, Ekabhishek, EmreDuran, Emreksd,
Enochlau, Epbr123, EvanED, EverGreg, FDRTools, Famouslongago, Fbanty, Feraudyh, Ferozeea, Fir0002, Fitchhollister, Forsh, Frankie0607, Frappyjohn, Frecklefoot, Galark, Galmicmi,
GandalfDaGraay, GarconN2, Garrett Albright, Gaschurman, Gayscout, Gdavidp, George100, Gewehr223, Giftlite, Gilyazov, Gmaxwell, Goa103, Gphoto, GregorB, GregorDS, Groogle,
Gryzzly92, Gulyshen, Gump Stump, Guthrie, Guybrown, Gwern, H, HDRPhotoman, HaroldatState, Hasanq, Heliac, Helix84, HiraV, Hu, Hu12, Hydrargyrum, Ian Fortuno, Icarusgeek, Imroy,
Indon, Innov8or, Invenio, Iricigor, IvorE, Jahilia, Jan Hofmann, Jauburn, Jdefoore, Jearle, JeffreyLiu-NJITWILL, Jellocube, Jerryobject, Jim McKeeth, Jim.henderson, Jimkack, Jksolomon,
Jmiles1, Jmrowland, Joe Decker, Johnicon, JojoMojo, Jonathan Stanley, JorisvS, Jpatokal, Jpwbamber, Jsluoma, Julien29, Jurohi, JustinHall, KC9AIC, KJKingJ, KaiUwe, Karam.Anthony.K,
KennanLau, Kinema, Kmccoy, Knobunc, KnowledgeOfSelf, Komap, Kozuch, Krawczyk, Kyng, La Pianista, Lamoid, Laserion, Latebird, Leandrod, Leebert, Leedeth, Leonard G., Lightmouse,
LilHelpa, Lord Waqaas, Lugia2453, Lwliam, M, M1ss1ontomars2k4, MER-C, Malachite36, Mangelo48, Marcelo1229, Marek69, Martarius, Mazin07, Meisam, Mfero, MichaelJanich,
Mikachu42, Mindmatrix, Misterx2000, Mm40, Moojoe, Morio, Mr Minchin, MrMarmite, MrX, Msiebuhr, Mszklanny, Mtoddy, Mulletsrokkify, Musicanselr, Nagualdesign, Nattfodd, Nbarth,
Needlenose, Neptune8907, Nevil0, Nevit, Nick Number, Nickomargolies, Nikevich, Niky cz, Njaelkies Lea, NoSoftwarePatents, Noclip, North8000, Noso1, Nrolff72, Ntwwoit, ONEder Boy,
Octane, Ohiostandard, Onorem, Ortolan88, Ost316, Ozhiker, PS2pcGAMER, Pearle, Peej23, Pepo13, Pepve, Pestling, Peter Campbell, PeterFisk, Philipbailey2010, Philwiki, Phoenixdolphin,
Photogold, Photon81, Phrood, PimRijkee, Pmanderson, Pokipsy76, Protohiro, Psd, Qkowlew, Qutezuce, R'n'B, Rafm, Ragesoss, Randyoo, Raveonpraghga, Reisio, Ren Kusack, Reverie98,
Rfscott, Rgranzoti, Richie, Rjwilmsi, Rmac, Robert K S, Rocket Laser Man, Rockynook, Rodii, Rogerdpack, Roodi2000, Rror, Rubiomik, Sapphirecut, SarekOfVulcan, Sergeramelli, Serouj,
Signalhead, Simetrical, Simplymono, Siotha, Skatebiker, SkyWalker, Snek01, Soler97, Soulkeeper, Spear, Spellmaster, Squids and Chips, Ssilk, Stephenbrasil, Sterrys, SteveMcCluskey,
Stevenbjerke, Super-Magician, Supreme Deliciousness, Suruena, Sven Boisen, Swarve, Swilsonmc, T-tus, Tchavalas, Tevildo, Tevonic, ThaddeusB, The Thing That Should Not Be, TheArvie,
TheGerm, TheMindsEye, TheToastster, Thelastminute, Theshadow27, Thumperward, Tijfo098, Titoxd, Tkgd2007, Tobias Schmidbauer, Tom87020, Tomlee2010, Tony1, Toytoy, Turian,
TutterMouse, Ugen64, Upshot, Utpress, Valiaikainen, Vance&lance, Verne Equinox, Walden, Walshga, Webadisi, Websterwebfoot, Wedesoft, Who What Where Nguyen Why, Wikinaut,
WilliamSommerwerck, WookieInHeat, Xfrank, Yahya Abdal-Aziz, Yug, Yvolution, Z.E.R.O., ZS, Zarex, Zero0w, Zimberoff, , , 700 anonymous edits
Contre-jour Source: http://en.wikipedia.org/w/index.php?oldid=563368547 Contributors: 16@r, 83d40m, Al Fecund, Alvesgaspar, Amandajm, Arpingstone, Beyond silence, Bjdehut, Brookie,
Cburnett, Centrx, Czar, Encyclopedist, Etan J. Tal, FHy, Heron, Hooperbloob, Howcheng, Imroy, Ioscius, Jjron, John254, KFP, Kappa, Kfasimpaur, Knuckles, Lemon-s, Leonardo Boiko,
Lukeroberts, Mactographer, Mercy11, Mheger, MisfitToys, Nbarth, Nono64, Ppntori, Qfl247, Ralf Roletschek, Redquark, Robert Weemeyer, Robofish, SKPhoton, Shyam, SlaveToTheWage,
Srtxg, Stephen C. Carlson, Tem42, TenOfAllTrades, Tomer T, Vikreykja, Xaque, Zundark, 12 anonymous edits
Night photography Source: http://en.wikipedia.org/w/index.php?oldid=567052914 Contributors: Adam.J.W.C., Advanced, Aitias, AndyKamy, ArinAhnell, Atorero, Beland, Ben pcc,
Betacommand, Bidiot, BigHaz, Blanche456, Chekhov 2, Czenkaj, Czolgolz, Dazecoop, Dekisugi, El aprendelenguas, Ericd, Finetooth, Fuhghettaboutit, Goncalo Lemos, Gordonov, Gsarwa,
Henrik A., Hermione9753, Imroy, Jacopo188, Joeywallace9, Johnmperry, Kprateek88, Kungfuadam, Lahiru k, Levangel, LogiJake, MPF, Miklbarton, Mouse Nightshirt, Mstahl, Nbarth,
Noctographer, Onebravemonkey, PKT, PeteDaines, Picturetokyo, Radagast83, RafiKoria, Raysonho, Rebrane, RexNL, Rfkphoto, Rhobite, Rich Farmbrough, Rjwilmsi, Roei.tabak, Rreini,
Rudaeva, Sannse, Sardaka, Scheibenzahl, Shaddack, Skyephoto, Srleffler, SteinbDJ, Steve Harper, Steve Harper Photographer, SteveHopson, TheMindsEye, Thelordofthemanor, Theviewlv,
Thisandthem, Triddle, Troy Paiva, Twinsday, Tyqunsamuel, Velella, VernoWhitney, Versageek, W.M.DeJardine, Wangi, WeeSKI, Wwagner, Xalan mustafa, Xx409xx, Yerpo, Zabrinatipton,
, 178 anonymous edits
Multiple exposure Source: http://en.wikipedia.org/w/index.php?oldid=551697225 Contributors: Ajuk, AndyHe829, Animaldetector, Bambolea, Calrion, Celuici, Chris Ducat, Ckatz,
Ctruongngoc, DaveWF, Deltabeignet, Dobs, Ewawer, Fruit.Smoothie, Glogger, Hooperbloob, JForget, Jacopo188, Jafeluv, Janendra, Jason Quinn, Jhf44, Jkl, John, Leon7, Lerdsuwa, Light
current, Little green rosetta, Mactographer, MarkusHagenlocher, Matthiaspaul, Mboverload, Morio, Mr.Critic123, Noso1, O'Dea, Ovis23, Paulstgeorge, Pepo13, Phiarc, QuentinUK, Quibik,
Radagast83, Ren Kusack, Slaenterprises, Srittau, Statsone, Sven Boisen, Tannenmann, The Phoenix, TheMindsEye, WikiPedant, 48 anonymous edits
Camera obscura Source: http://en.wikipedia.org/w/index.php?oldid=564476919 Contributors: 2fort5r, 7efty, Aamackie, Agreene175, Ahoerstemeier, Akerans, Alansohn, Alpha 4615,
Amikake3, Ancheta Wis, Andre Engels, AndrewCStuart, Arab Hafez, Armbrust, Art LaPella, Auricfuzz, Auronrenouille, Avoided, Barek, Barneca, Baseball Bugs, Benjaminpender, Bluerasberry,
Boing! said Zebedee, Bookofjude, Boomshanka, Brain seltzer, Brandon, Breno, Brian Crawford, Brion VIBBER, Broehm, BrokenSphere, Bryan Derksen, Cbaer, Cburnett, Charles Matthews,
ChrisGualtieri, Chuunen Baka, Chzz, Clicketyclack, Coasterlover1994, Code6840, Communication ccl, Conscious, Craig Butz, DVD R W, Darrellk, Dave Muscato, DavidParfitt, Dee Fraser,
Deor, Dicklyon, DirkvdM, Discospinster, Dmmaus, Dontaskme, Doug butler, Downtownee, Dra3b, Dronne, Durova, E2eamon, ELApro, Eagleridge, Edal, Eklotzko, Eliyak, Ellin Beltz, Epbr123,
Epolk, Erdesky, Ericd, Erik9, Falcon8765, Ferrarama, FiggyBee, Flamesplash, Flaminghomeryto, Fleminra, Fluffystar, Fork me, Freshacconci, Froid, Fumitol, Funandtrvl, Furrykef,
Futonrevolutionary, Geke, Glacialfox, Gnidan, Graham87, GrayFullbuster, Harpsichord246, Heron, Hexrei2, Hhhippo, Hmains, Hoary, Hu12, Hushus20, IMSoP, Iain4724, Iamhove, Igoldste,
Imroy, InverseHypercube, Ioerror, Iridescent, Is anything not in use, Ixfd64, J8079s, JPDW, Jagged 85, JamesBWatson, Janneok, Jennavecia, Jim Casper, Jimmybob123, JohnClarknew, Johnbod,
Joshua6107, Jpbowen, Julzzz, Kent Wang, Kevo00, Killer4571, Knight1993, KnightRider, Koven.rm, Krisrich, Kurt Eichenberger, Kusma, Lacomba, Lars Washington, Latka, LeaveSleaves,
Lensicon, Liftarn, LindsayH, LonelyMarble, Lugia2453, LuoShengli, MPerel, Malcolm Farmer, Mandarax, Maria1853, MariaCA, Mav, Meggar, Mencial, Merlinme, Mhockey, Michael Hardy,
Michaelkirschner, Mike Dillon, Mike Rosoft, Mild Bill Hiccup, Mindmatrix, Minna Sora no Shita, Misterjta, Miyagaya, Mmarkon, Modernist, Mononomic, Mostly water, MrOllie, Natvh4,
NawlinWiki, Neilbeach, Nekura, NigelLumsden, Nixeagle, NoelWalley, Norwikian, NuclearWarfare, Nuno Tavares, OOODDD, Obscurantist, Omnipaedista, Ortolan88, Oxymoron83, Paddles,
Paleo-camera, Paul1776, PericlesofAthens, Philip Trueman, Piano non troppo, Piledhigheranddeeper, Pinethicket, Player017, Poetaris, Polylerus, President Rhapsody, Pseudomonas, Punctured
Bicycle, R N Talley, RL0919, RPSM, Renata, Robby.is.on, Rocastelo, Rococo1700, Rodw, Rookkey, Rouis.k, Rrjr0306, Rtucker913, Sam Clark, Saraplacid, Schiec, Scwlong, Seth Ilys, Shaul1,
Shell Kinney, Shibes, SilkTork, Sina7, Smokeyfire, Snoyes, Sohale, Spacepotato, Sparkit, Spitfire19, Srleffler, Stan Shebs, SteveBaker, Stevertigo, Stigmj, TBloemink, Tarquin, Tedernst, The
Bearded Man, The Thing That Should Not Be, The Yowser, TheRingess, Thedjatclubrock, Thorwald, Thuen, Tiddly Tom, Tktktk, Tobby72, Tomasz Prochownik, Tony Corsini, Topory, Totally
screwed, Tow, Ukexpat, Ulric1313, Unrealwriter, Valfontis, Vanderdecken, Verbalcontract, Violetriga, Visulate, Wayne Slam, Wikievil666, Willhsmit, William M. Connolley, Williaz13,
Wolfrock, Wxyz098, Xmastree, Yemal, Zosodada, Zundark, , 493 anonymous edits
Pinhole camera Source: http://en.wikipedia.org/w/index.php?oldid=566370761 Contributors: 09swild, 14chongce1, 842U, Ada shen, AddressOk, Alan Liefting, Aljays, AllanX, Alphachimp,
Alsaker, AlternativePhotography, Amatulic, Amble, Anna Lincoln, Apokryltaros, Arniep, Art Carlson, Art LaPella, Artlondon, Arturobandini, Astaykov2, AwamerT, Baffle gab1978, Bansp,
Barek, Barneca, BenFrantzDale, BigPimpinBrah, Bjdehut, Bobblewik, Bobo192, Bongwarrior, Booksworm, BorgQueen, Braincricket, Brandon5485, Brion VIBBER, Buybooks Marius, C.uzum,
Can't sleep, clown will eat me, Canadian-Bacon, Capricorn42, Carcharoth, Cbaer, Cchriste, CharlesHBennett, ChrisHodgesUK, Christopherlin, Ciphers, Cjs2111, Cmglee, Cnetra,
CommonsDelinker, CompactFish, Crazysane, Cremepuff222, Cskirksey, Cst17, DVdm, Dancter, Danner578, Danski14, Dav2008, Dave souza, Davewho2, David Martland, Davidgutschick,
Deltabeignet, Deor, DerHexer, Dicklyon, Discospinster, Dobs, Dojo 19, Douglas Whitaker, DrBob, Ds13, Duchamp, Egil, Eivind F yangen, ElKevbo, Elizgoiri, Epbr123, Epitalon, Erianna,
Ericd, Erik Kennedy, EscapingLife, EvelinaB, Ewan McGregor, Excirial, Falcon8765, Firestarter001, Fourohfour, Frango com Nata, Freakofnurture, Freshacconci, Gadfium, Gail, Geni, Giftlab,
Gilliam, Gingersnaps02689, Glane23, Glenn, Googleplex12, Gpeterw, Guanaco55, H005, Habib Muradov, Hambergerhead, HamburgerRadio, Heimstern, Hooperbloob, Horkana, ITEsafety,
Imnotminkus, Imroy, Isarra (HG), J.delanoy, J8079s, JJ Harrison, Jadedoto, Jagged 85, JohnClarknew, Joseph Solis in Australia, KYN, Kate, Kazikameuk, Kencf0618, Khateeb88, Klilidiplomus,
Kristoffersson, Kurt Eichenberger, Kwamikagami, LOL, Lacomba, Larry laptop, Leafyplant, Mactographer, Mandarax, MarcM1098, MarkSweep, Martarius, Martijn Witlox, Materialscientist,
Mato, Mcyccc, Merovingian, Mindmatrix, Minna Sora no Shita, Mmarkon, Mooglemoogle, Mpetrizzo, Mr Stephen, Mr. Stradivarius, MrBenn, MrOllie, Mstahl, Muua3, Mysidia, NawlinWiki,
Nbreslin, Nicolai g, Nonexistant User, NoodleWhacks, Numbermaniac, Nyttend, Olivier, Osman09, Oxymoron83, Paddles, Pawyilee, Pbroks13, Peppergrower, PericlesofAthens, Petero9, Phiarc,
Phil Gee, Philthecow, Pieter Kuiper, PlanetStar, Poetaris, Professorjohnas, Pscott558, Pustelnik, Pyrospirit, Qqqee, R'n'B, Raghav96, RainbowOfLight, Ratherbeinsane, Rc3784, Reach Out to the
Truth, Reatlas, RedWolf, Redbobblehat, Rettetast, RexNL, Rich Farmbrough, Rjwilmsi, Rostayob, Rror, Sam Hocevar, Sam Korn, Sanbeg, Sapata, Sardanaphalus, Schwallex, Sean William,
Secret (renamed), Sg227, ShakingSpirit, Shanes, Shanmugamp7, Sho Uemura, SimonD, SkerHawx, Slakr, Smurfoid, Snodfrey, Socrates2008, SonicAD, Spacepotato, SpellChecker, Srleffler,
Stephenb, Strice, Stroumphette, Sulfis, Sven Korsgaard, Syncategoremata, The Quirky Kitty, The Thing That Should Not Be, Theda, Thegreenj, Themaskobscure, Thingg, Three, Tinss, Tobias
Bergemann, Toghome, Triona, Tweek, Txomin, Tyrenius, Van der Hoorn, Velella, Very little gravitas indeed, Violinist67, Vrenator, Wandering Ghost, Webclient101, Wik, WikiDao, Wile E.
Heresiarch, William Avery, Wimt, Wow.its.alana, Wxyz098, Xcentaur, YSH123, Zanhsieh, Zetlinfiend, Zolac1, Zolac69, Zundark, , 672 anonymous edits
Stereoscopy Source: http://en.wikipedia.org/w/index.php?oldid=568013898 Contributors: (ACB) 3-D, 3dnatureguy, 3dreview, 7, 7&6=thirteen, A876, AED, AVarchaeologist, Aaron Walden,
AccuTrans, Adjusting, Ahookdotnet, Akbana ee, Al.locke, Alan Parmenter, AlatarK, Alekjds, Alexander.stohr, Alexcliftontrio, Alexf, Alexone234, Algae, Alpalfour, Alphathon, Altes2009,
Americasroof, Amicon, Anarkigr, Anaxial, Anditsonfire, Arcadia616, Aretyoods, Argotechnica, ArnoldReinhold, Arteitle, Asteinberg, Astrochemist, Astrophil, Aubilenon, Auntof6, Avenged
Eightfold, Averagebloke, Awoods3d, AxelBoldt, BanyanTree, Bathflat8, Bci2, Beetstra, BenFrantzDale, Bender235, Benoitmichel, Berkant atay, Bilby, Bittoe, BjKa, BlackSharkfr, Bradshaws1,
BrainStain, Bronson raleigh, Burgundavia, CRabe, Calton, Caltrop, Cancerbero 8, CanisRufus, CannibalSmith, Capitan Trueno, Carbo3d, Carlatkmtkm, Carn, CarolSpears, Celuici, Cfrunyon,
Charvex, Chowbok, Chris Howard, Chris the Paleontologist, Chris the speller, Christopher Sajdak, ClaretAsh, Cmglee, CobraWiki, Collect, Commonlingua, CommonsDelinker, Corvus, Craig
Butz, Crisco 1492, Cst17, Ctachme, Cumplaywithmikehawk, Cyberscience3d, DCFan101, DMG413, DWaterson, DXBari, Dan100, Dancter, Danh, Daniel C, Daniel Mietchen, Darkpool,
Darrenhusted, Dave3457, Davepape, DavidCary, Davidhorman, Davidwkuntz, Dawnseeker2000, Dazp1970, Dcerisano, DeFacto, Deelkar, Derekleungtszhei, Diamondland, Digitalrevolution3d,
Dinoj, DirectorG, Discospinster, Don't give an Ameriflag, Dovo3d, Dreammaker182, Drewmutt, DropDeadGorgias, Dthomsen8, Durova, Ed g2s, Ego White Tray, Ehurtley, EivindJ, El C, El pak,
EncMstr, Enterfrize, EoGuy, Epbr123, Evil saltine, Falcon8765, Firien, Firsfron, Fluffystar, Flyers77, Forresto, Fplay, Fraggle81, Fred Hsu, Fredrik, Friedrich.Schick, Furrykef, G.Petrin, Gaius
Cornelius, Galidakis, Gandalf's hat, Garvin58, Geckoiw, GenaRos, Geof, Gerbrant, Gert7, Ggrm, Gnewf, Gogo Dodo, GoingBatty, Gr8white, GreenGambler, Gsarwa, Guccidad, Gunter, Gutza,
Gvitaly, H.Cesar Rubio, Healthcareexp, Hephaestos, Heron, Hooperbloob, Houseofsutton, Hsinghsarao, Hu, Hu12, Hugarh, Hyacinth, INFITEC-Dualcolor3D, IVAN3MAN, Ida Shaw, Ihope127,
IlPasseggero, Image3D, Infrogmation, Inition, J04n, JHunterJ, JMS Old Al, JMyrleFuller, JTN, JacobDeight, JamesHoadley, Jamieson630, Jan Arkesteijn, JaneBernaghner, Janke, Japo, Jarble,
Jclerman, Jdavis9000, JeffJonez, Jengod, Jeysaba, Jimius, John Elson, JohnCD, Johnmorgan, Johnuniq, Jozue, Jrockley, Jtalledo, KJK::Hyperion, KLLvr283, Keilana, Khatru2, Khazar2,
KimberTN, Kinu, Klparrot, KnowBuddy, Kocio, KoshVorlon, Kotarou3, Koven.rm, Kshnbd, Kyng, Lambyte, Leonard G., LeonardoGregianin, Leszek Jaczuk, LilHelpa, Linnell, LionShare,
Looxix, Lordofallkobuns, Lotje, Lskil09, LucasVB, Lumos3, Luneraako, Lupin, M-le-mot-dit, MER-C, MStraw, MWielage, Mac, Macedonian, Magnus Manske, Mandarax, Manop, Marcel
Dublin, Mark Forest, Martarius, Martyman, Mattgirling, Mcapdevila, Mdann52, Michael Hardy, Michael L. Kaufman, Minecrafted, Miss A DK, Mnmngb, Montymintypie,
Mortenoesterlundjoergensen, Motorman45, Mr Stephen, MrAdventur3, MrOllie, Mumiemonstret, Myotis, Mzamora2, N.s.holliman, N3vln, Nativeborncal, NawlinWiki, Nesnad,
Nicoladegiovanni, Ninjagecko, Niteowlneils, Nono64, Nopetro, Normandisteele, Novangelis, Obsidian Soul, Ocoles, Octavian history, Olivier.amato, Optimist on the run, Paddu, Paranoid,
Pastordavid, Patrick, Paulnasca, Pearle, Pengo, Penniedreadful, Peter S., Petersam, Photo-3d, Photoarts, PixelPartner, Pjacobi, Popsracer, Porsche997SBS, Private meta, Prolog, Psoreilly,
Quantumobserver, Queenvictoria, Quig, R'n'B, RJASE1, RW Marloe, Rachel.howard, Radagast83, RadioFan, RadkaLG, Rama, Ravusvita, Recury, Reddi, Reellis67, Reify-tech, Reinoutr,
Rekamen, Rems 75, RexLion, Richie, Rixs, Rjwilmsi, Rmallins, Robert P. O'Shea, RodC, Rodrigob, Rofthorax, Roshniashar, Royalbroil, Rrburke, Rsduhamel, Rubberdude2010, SYPHA2001,
Salamurai, Sam Hocevar, Sam Korn, Sapata, Savie Kumara, Sbwoodside, Seishirou Sakurazuka, Sergeyy, Setanta747, Sidepocket, Sietse Snel, Signalhead, Silas10961, Sin-man, SiriusB, Sladen,
Slashme, Sluisga, Soler97, Solopiel, Some standardized rigour, SomeFreakOnTheInternet, Space station genie, Sprocket19, Srich32977, Srleffler, Staticshakedown, StephanCom, Stephen Day,
SteveHopson, Sun Creator, Superpika66, Tabledhote, Tangstra, Tdimhcs, TechPurism, TexasDex, Th1rt3en, TheMindsEye, Thincat, TiagoTiago, Tide rolls, Tocharianne, Toussaint, Toybuilder,
Tracergraphix, TravisNygard, Tregoweth, Trevj, Treygdor, Tridakt, Tridi, Tristan Horn, Trivialist, Trusilver, Ulflund, Unclepea, Unknown W. Brackets, User A1, Uvaphdman, Vailads, Vanished
user uih38riiw4hjlsd, Vasily-Vasily, Vaynamar, Verne Equinox, Vilerage, ViperSnake151, Vkil, Voidxor, Volkan Y, Volkan Yuksel, Vossman, Vrenator, Vybr8, WVhybrid, Wagino 20100516,
Wayne Gloege, Webmoof, Who, Wiaijk, Wik, Wiki-user-lsw, Wiki13, WikiSlasher, Wikibarista, Wikipedian2, Willsiv, Windward1, Wintonian, Wittkowsky, Wm, Woohookitty, Wuhwuzdat,
X-Fi6, Xaa, Xavier Gir, Xenonice, Xenophon777, Xnn, Xorx77, Xyoureyes, Zanimum, Zigger, Zolikk, 588 anonymous edits
Image Sources, Licenses and Contributors
File:ZoneSystem-Gradient.png Source: http://en.wikipedia.org/w/index.php?title=File:ZoneSystem-Gradient.png License: GNU Free Documentation License Contributors: Original uploader
was Imroy at en.wikipedia
File:ZoneSystem-Gradient-lines.png Source: http://en.wikipedia.org/w/index.php?title=File:ZoneSystem-Gradient-lines.png License: GNU Free Documentation License Contributors:
Original uploader was Imroy at en.wikipedia
File:Redscale.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Redscale.jpg License: Public Domain Contributors: SirJello37
File:BrnoSunsetHDRExampleByIgor.jpg Source: http://en.wikipedia.org/w/index.php?title=File:BrnoSunsetHDRExampleByIgor.jpg License: Creative Commons Attribution-Share Alike
Contributors: Igor Iric
File:Leuk01.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Leuk01.jpg License: Creative Commons Attribution 2.0 Contributors: Wolfgang Staudt from Saarbruecken, Germany
File:HDR image + 3 source pictures (Cerro Tronador, Argentina).jpg Source:
http://en.wikipedia.org/w/index.php?title=File:HDR_image_+_3_source_pictures_(Cerro_Tronador,_Argentina).jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Mariano Szklanny
Image:New York City at night HDR edit1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:New_York_City_at_night_HDR_edit1.jpg License: Creative Commons
Attribution-Sharealike 2.0 Contributors: Paulo Barcellos Jr.
File:LeGray brick.jpg Source: http://en.wikipedia.org/w/index.php?title=File:LeGray_brick.jpg License: Public Domain Contributors: Jarekt, Yann
File:Nuvola apps kview.svg Source: http://en.wikipedia.org/w/index.php?title=File:Nuvola_apps_kview.svg License: unknown Contributors: Ch1902, Saibo
file:Searchtool.svg Source: http://en.wikipedia.org/w/index.php?title=File:Searchtool.svg License: GNU Lesser General Public License Contributors: Anomie
File:Wyckoff HDR Curve.tif Source: http://en.wikipedia.org/w/index.php?title=File:Wyckoff_HDR_Curve.tif License: unknown Contributors: SteveMcCluskey
Image:StLouisArchMultExpEV-4.72.JPG Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpEV-4.72.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: Kevin McCoy
Image:StLouisArchMultExpEV-1.82.JPG Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpEV-1.82.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: Kevin McCoy
Image:StLouisArchMultExpEV+1.51.JPG Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpEV+1.51.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: Kevin McCoy
Image:StLouisArchMultExpEV+4.09.JPG Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpEV+4.09.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: Kevin McCoy
File:StLouisArchMultExpCDR.jpg Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpCDR.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: StLouisArchMultExpEV-4.72.JPG: Kevin McCoy StLouisArchMultExpEV-1.82.JPG: Kevin McCoy StLouisArchMultExpEV+1.51.JPG: Kevin McCoy
StLouisArchMultExpEV+4.09.JPG: Kevin McCoy derivative work: Darxus (talk)
File:StLouisArchMultExpToneMapped.jpg Source: http://en.wikipedia.org/w/index.php?title=File:StLouisArchMultExpToneMapped.jpg License: Creative Commons Attribution-Sharealike
3.0 Contributors: StLouisArchMultExpEV-4.72.JPG: Kevin McCoy StLouisArchMultExpEV-1.82.JPG: Kevin McCoy StLouisArchMultExpEV+1.51.JPG: Kevin McCoy
StLouisArchMultExpEV+4.09.JPG: Kevin McCoy derivative work: Darxus (talk)
Image:The Sound of Silence +2EV.jpg Source: http://en.wikipedia.org/w/index.php?title=File:The_Sound_of_Silence_+2EV.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Neptune8907
Image:The Sound of Silence -2EV.jpg Source: http://en.wikipedia.org/w/index.php?title=File:The_Sound_of_Silence_-2EV.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Neptune8907
File:The Sound of Silence Resulting HDR.jpg Source: http://en.wikipedia.org/w/index.php?title=File:The_Sound_of_Silence_Resulting_HDR.jpg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: User:Neptune8907
Image:Al Qasba.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Al_Qasba.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Ferozeea
Image:Trencin hdr 001.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Trencin_hdr_001.jpg License: Creative Commons Attribution 3.0 Contributors: Abphoto
Image:Charles River 2 (Pear Biter).jpg Source: http://en.wikipedia.org/w/index.php?title=File:Charles_River_2_(Pear_Biter).jpg License: Creative Commons Attribution-Sharealike 2.0
Contributors: Eric Hill from Boston, MA, USA
Image:CoucherDeSoleilLaGarde.jpg Source: http://en.wikipedia.org/w/index.php?title=File:CoucherDeSoleilLaGarde.jpg License: Creative Commons Attribution-Sharealike 3.0,2.5,2.0,1.0
Contributors: Philippe Cosentino
Image:DusseldorfOldPort.jpg Source: http://en.wikipedia.org/w/index.php?title=File:DusseldorfOldPort.jpg License: Creative Commons Attribution 2.0 Contributors: Jo Schmaltz from
Hongkong, Hongkong
Image:Goztepe Park 06773.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Goztepe_Park_06773.jpg License: GNU Free Documentation License Contributors: Nevit Dilmen
Image:Hackerbrücke Sonnenuntergang.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Hackerbrücke_Sonnenuntergang.jpg License: Creative Commons Attribution-Sharealike
3.0 Contributors: Richard Huber
Image:Grand Hotel, Lund-1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Grand_Hotel,_Lund-1.jpg License: Public Domain Contributors: ComputerHotline, Darwinius,
Dcastor, H005, Mattes
Image:Kobrin suvoros st..jpg Source: http://en.wikipedia.org/w/index.php?title=File:Kobrin_suvoros_st..jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Askad.skynet
Image:Skadovsk sunset.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Skadovsk_sunset.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Askad.skynet
Image:HDR The sound of silence (The road to Kamakhya).jpg Source: http://en.wikipedia.org/w/index.php?title=File:HDR_The_sound_of_silence_(The_road_to_Kamakhya).jpg License:
Creative Commons Attribution-Sharealike 3.0 Contributors: User:Neptune8907
Image:View from the Dandenongs in winter.jpg Source: http://en.wikipedia.org/w/index.php?title=File:View_from_the_Dandenongs_in_winter.jpg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: User:Peter Campbell
file:Contre jour, Queenscliff, jjron, 05.12.2009.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Contre_jour,_Queenscliff,_jjron,_05.12.2009.jpg License: GNU Free Documentation
License Contributors: jjron
file:The Photographer.jpg Source: http://en.wikipedia.org/w/index.php?title=File:The_Photographer.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors:
Joaquim Alves Gaspar
File:Night Photography.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Night_Photography.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
www.modernartphotograph.com Portland Photographer Robert Knapp
File:Singapore Skyline Raffles Place.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Singapore_Skyline_Raffles_Place.jpg License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: Image by Calvin Teo (User:Advanced)
File:Top of Jebel Hafeet.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Top_of_Jebel_Hafeet.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Xalan
mustafa
File:Nyc10795u.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Nyc10795u.jpg License: Public Domain Contributors: Berrucomons, Jmabel, Simonizer, Thierry Caro,
Trialsanderrors
File:Chay kenar - Tabriz.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Chay_kenar_-_Tabriz.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
User:Jacopo188
File:Subhash Marg in Indore, India.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Subhash_Marg_in_Indore,_India.JPG License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: Kprateek88
File:Sydney Opera House - Dec 2008.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Sydney_Opera_House_-_Dec_2008.jpg License: Creative Commons Attribution-Sharealike
3.0 Contributors: Diliff
Image:Bay Bridge at night by Mikl Barton.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bay_Bridge_at_night_by_Mikl_Barton.jpg License: Creative Commons Attribution 3.0
Contributors: Mikl Barton
Image:Picture 121.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Picture_121.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors: Adrignola,
Look2See1, Morio, Picturetokyo, Ronaldino, Vantey
Image:Example of night photography at The Garden of Five Senses, New Delhi.JPG Source:
http://en.wikipedia.org/w/index.php?title=File:Example_of_night_photography_at_The_Garden_of_Five_Senses,_New_Delhi.JPG License: Creative Commons Attribution-ShareAlike 3.0
Unported Contributors: Kprateek88
Image:Carnival wikipedia.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Carnival_wikipedia.JPG License: GNU Free Documentation License Contributors: Theviewlv, 2
anonymous edits
Image:WashingtonParkBlossomingTree.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WashingtonParkBlossomingTree.jpg License: Public Domain Contributors: Self.
Image:WTC Twin Towers Night July 2001.jpg Source: http://en.wikipedia.org/w/index.php?title=File:WTC_Twin_Towers_Night_July_2001.jpg License: Creative Commons
Attribution-Sharealike 2.0 Contributors: Filipe Fortes from New York, United States
File:Ggb by night.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Ggb_by_night.jpg License: GNU Free Documentation License Contributors: Image taken by Daniel Schwen
Image:Taipei 101 at night.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Taipei_101_at_night.jpg License: GNU Free Documentation License Contributors: Meow
Image:STS-109 launch.jpg Source: http://en.wikipedia.org/w/index.php?title=File:STS-109_launch.jpg License: Public Domain Contributors: NASA
Image:TorontoHTOPark5.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TorontoHTOPark5.jpg License: Public Domain Contributors: Raysonho @ Open Grid Scheduler / Grid
Engine
File:HMAS Onslow-.jpg Source: http://en.wikipedia.org/w/index.php?title=File:HMAS_Onslow-.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Adam.J.W.C.
Image:(1) UNSW entrance a2.JPG Source: http://en.wikipedia.org/w/index.php?title=File:(1)_UNSW_entrance_a2.JPG License: GNU Free Documentation License Contributors: Orestes654
(talk) 08:57, 3 December 2010 (UTC)
File:Copenhagen at night.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Copenhagen_at_night.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Roei.tabak
Image:Lunar-eclipse-2004.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Lunar-eclipse-2004.jpg License: GNU Free Documentation License Contributors: Original uploader was
Mactographer at en.wikipedia (Original text : David Ball)
File:Karlheinz Stockhausen par Claude Truong-Ngoc 1980.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Karlheinz_Stockhausen_par_Claude_Truong-Ngoc_1980.jpg License:
Creative Commons Attribution-Sharealike 3.0 Contributors: User:Ctruongngoc
File:Neon Cowboy Sunset double exposure.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Neon_Cowboy_Sunset_double_exposure.jpg License: Creative Commons Attribution
2.0 Contributors: Cameron Russell from AUSTIN, TX, USA
File:Multiexposer of 7 person using adobe photoshop new.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Multiexposer_of_7_person_using_adobe_photoshop_new.jpg License:
Creative Commons Attribution-Sharealike 3.0 Contributors: Janendra
Image:UndigitalD2h.jpg Source: http://en.wikipedia.org/w/index.php?title=File:UndigitalD2h.jpg License: GNU Free Documentation License Contributors: Glogger, 1 anonymous edits
Image:Camera obscura box.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camera_obscura_box.jpg License: GNU Free Documentation License Contributors: Che, Draceane,
Stigmj, Wutsje, Zhuyifei1999, 3 anonymous edits
File:Camerae-obscurae.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camerae-obscurae.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Edal Anton
Lefterov
File:Camera obscura Prague.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camera_obscura_Prague.jpg License: Creative Commons Attribution 3.0 Contributors: Gampe
Image:Camera obscura.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camera_obscura.jpg License: Public Domain Contributors: Augiasstallputzer, Conscious, Foma39, Jodo,
Lars Washington, Ma-Lik, Maksim, 1 anonymous edits
File:Camera obscura2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camera_obscura2.jpg License: Public Domain Contributors: unknown, possibly Italian
File:Canaletto4fogli.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Canaletto4fogli.jpg License: Public Domain Contributors: clop
Image:CameraObscura.JPG Source: http://en.wikipedia.org/w/index.php?title=File:CameraObscura.JPG License: Public Domain Contributors: Seth_Ilys (talk) (Uploads). Original uploader
was Seth Ilys at en.wikipedia
Image:CameraObscuraSanFranciscoCliffHouse.jpg Source: http://en.wikipedia.org/w/index.php?title=File:CameraObscuraSanFranciscoCliffHouse.jpg License: GNU Free Documentation
License Contributors: Ioerror, Shyam
Image:Camara-obscura-image.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Camara-obscura-image.JPG License: Public Domain Contributors: Code6840
Image:Camera Obscura.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Camera_Obscura.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors: Saraplacid
Image:Camera_Obscura_box18thCentury.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Camera_Obscura_box18thCentury.jpg License: Public Domain Contributors: unknown
illustrator
Image:Modern Camera Obscura.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Modern_Camera_Obscura.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:The Bearded Man
Image:Camera Obscura in Use.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Camera_Obscura_in_Use.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:The Bearded Man
File:Pinhole-camera.svg Source: http://en.wikipedia.org/w/index.php?title=File:Pinhole-camera.svg License: Public Domain Contributors: en:User:DrBob (original); en:User:Pbroks13
(redraw)
File:IMG 1650 zonsverduistering Malta.JPG Source: http://en.wikipedia.org/w/index.php?title=File:IMG_1650_zonsverduistering_Malta.JPG License: GNU Free Documentation License
Contributors: User:Ellywa
File:PinholeCameraAndRelatedSupplies.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PinholeCameraAndRelatedSupplies.jpg License: Creative Commons
Attribution-Sharealike 2.5 Contributors: Douglas Whitaker, Miquonranger03, 1 anonymous edits
File:Schießscharten als Lochkamera.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Schießscharten_als_Lochkamera.JPG License: Creative Commons Attribution-Sharealike
3.0,2.5,2.0,1.0 Contributors: H005
File:LongExposurePinhole.jpg Source: http://en.wikipedia.org/w/index.php?title=File:LongExposurePinhole.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: Ewan
McGregor
File:PinholeCameraImage.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PinholeCameraImage.jpg License: Public Domain Contributors: Willspear1564
File:Pinhole Photographs.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Pinhole_Photographs.JPG License: Public domain Contributors: 32bitmaschine, Conscious, JeremyA,
Maksim, Pierpao, 3 anonymous edits
File:PinholeCameraCloseup.jpg Source: http://en.wikipedia.org/w/index.php?title=File:PinholeCameraCloseup.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors:
Bkell, Douglas Whitaker, 2 anonymous edits
File:Pinhole hydrant neg pos.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pinhole_hydrant_neg_pos.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors:
Matthew Clemente
File:Pocket stereoscope.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pocket_stereoscope.jpg License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors:
Joaquim Alves Gaspar
File:Charles Street Mall, Boston Common, by Soule, John P., 1827-1904 3.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:Charles_Street_Mall,_Boston_Common,_by_Soule,_John_P.,_1827-1904_3.jpg License: Public Domain Contributors: M2545
File:August Fuhrmann-Kaiserpanorama 1880.jpg Source: http://en.wikipedia.org/w/index.php?title=File:August_Fuhrmann-Kaiserpanorama_1880.jpg License: Public Domain
Contributors: unknown, Original uploader was Nieborak at pl.wikipedia
File:Company of ladies watching stereoscopic photographs by Jacob Spoel 1820-1868.jpg Source:
http://en.wikipedia.org/w/index.php?title=File:Company_of_ladies_watching_stereoscopic_photographs_by_Jacob_Spoel_1820-1868.jpg License: Public Domain Contributors: Infrogmation,
Jan Arkesteijn, KTo288, Vincent Steenberg, 1 anonymous edits
Image:Charles Wheatstone-mirror stereoscope XIXc.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Charles_Wheatstone-mirror_stereoscope_XIXc.jpg License: Public Domain
Contributors: Original uploader was Nieborak at pl.wikipedia
File:Early bird stereograph2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Early_bird_stereograph2.jpg License: Public Domain Contributors: E.R. McCollister
Image:3dviewer.gif Source: http://en.wikipedia.org/w/index.php?title=File:3dviewer.gif License: Creative Commons Attribution-Share Alike Contributors: Samu3d David Samuel
File:View-Master Model E.JPG Source: http://en.wikipedia.org/w/index.php?title=File:View-Master_Model_E.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors:
ThePassenger
Image:EmaginZ800.jpg Source: http://en.wikipedia.org/w/index.php?title=File:EmaginZ800.jpg License: GNU Free Documentation License Contributors: User Psoreilly on en.wikipedia
File:Xpand LCD shutter glasses.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Xpand_LCD_shutter_glasses.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: Original uploader was Amidror1973 at en.wikipedia
File:REALD.JPG Source: http://en.wikipedia.org/w/index.php?title=File:REALD.JPG License: Public Domain Contributors: Midori iro
File:Anaglyph glasses.png Source: http://en.wikipedia.org/w/index.php?title=File:Anaglyph_glasses.png License: Creative Commons Attribution-ShareAlike 3.0 Unported Contributors:
Snaily
File:Farbfilterbrille mit Minilinsen.png Source: http://en.wikipedia.org/w/index.php?title=File:Farbfilterbrille_mit_Minilinsen.png License: GNU Free Documentation License Contributors:
Stefan Kühn. Original uploader was Stefan Kühn at de.wikipedia
Image:Bild-OpenKMQKit1.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bild-OpenKMQKit1.jpg License: Public domain Contributors: Thomas Kumlehn
File:Nintendo-3DS-AquaOpen.png Source: http://en.wikipedia.org/w/index.php?title=File:Nintendo-3DS-AquaOpen.png License: Public Domain Contributors: Evan-Amos
Image:Stereo Realist.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Stereo_Realist.jpg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Kim Scarborough
File:Sputnik stereo camera.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Sputnik_stereo_camera.jpg License: Creative Commons Attribution 3.0 Contributors: Bilby
Image:Fujiw3.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Fujiw3.jpg License: Creative Commons Attribution-Share Alike Contributors: John Alan Elson
Image:Stereogram guide cross-eyed.png Source: http://en.wikipedia.org/w/index.php?title=File:Stereogram_guide_cross-eyed.png License: Creative Commons Attribution-Sharealike 3.0
Contributors: Hyacinth
File:Stereo empire.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Stereo_empire.jpg License: Creative Commons Attribution 3.0 Contributors: User:El pak
File:Sterescopic image of Greenland X-3D by Volkan Yuksel DSC05293.JPG Source:
http://en.wikipedia.org/w/index.php?title=File:Sterescopic_image_of_Greenland_X-3D_by_Volkan_Yuksel_DSC05293.JPG License: Creative Commons Attribution-Sharealike 3.0
Contributors: User:Volkan Yuksel
file:3d glasses red cyan.svg Source: http://en.wikipedia.org/w/index.php?title=File:3d_glasses_red_cyan.svg License: Creative Commons Attribution-Sharealike 2.5 Contributors: Image
created using Inkscape by Daniel Schwen.
Image:Moonstereo1897.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Moonstereo1897.jpg License: Public Domain Contributors: Scanned by John Alan Elson
Image:Parlimit.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Parlimit.jpg License: Creative Commons Attribution-Share Alike Contributors: John Alan Elson
Image:LBMtDiabloclip.jpg Source: http://en.wikipedia.org/w/index.php?title=File:LBMtDiabloclip.jpg License: Creative Commons ShareAlike 1.0 Generic Contributors: Original uploader
was Leonard G. at en.wikipedia
Image:LBLFoothillsBWAna.jpg Source: http://en.wikipedia.org/w/index.php?title=File:LBLFoothillsBWAna.jpg License: Creative Commons ShareAlike 1.0 Generic Contributors:
Mdd4696, Mercurywoodrose, Schieber, Speck-Made
Image:Stereogram guide parallel.png Source: http://en.wikipedia.org/w/index.php?title=File:Stereogram_guide_parallel.png License: Public Domain Contributors: Hyacinth
Image:Wavellite3d.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Wavellite3d.jpg License: Creative Commons Attribution-Share Alike Contributors: John Alan Elson
License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/