
Aerial Photography

Introduction

Aerial photography has two uses that are of interest within the
context of this course: (1) Cartographers and planners take
detailed measurements from aerial photos in the preparation of
maps. (2) Trained interpreters utilize aerial photos to determine
land use and environmental conditions, among other things.

Although both maps and aerial photos present a “bird’s-eye” view
of the earth, aerial photographs are NOT maps. Maps are
orthogonal representations of the earth’s surface, meaning that
they are directionally and geometrically accurate (at least within
the limitations imposed by projecting a 3-dimensional object onto
2 dimensions). Aerial photos, on the other hand, display a high
degree of radial distortion. That is, the topography is distorted,
and until corrections are made for the distortion, measurements
made from a photograph are not accurate. Nevertheless, aerial
photographs are a powerful tool for studying the earth’s
environment.

Because most GISs can correct for radial distortion, aerial
photographs are an excellent data source for many types of
projects, especially those that require spatial data from the same
location at periodic intervals over a length of time. Typical
applications include land-use surveys and habitat analysis.

This unit discusses benefits of aerial photography, applications,
the different types of photography, and the integration of aerial
photographs into GISs.
Basic Elements of Air Photo Interpretation

Novice photo interpreters often encounter difficulties when
presented with their first aerial photograph. Aerial photographs
are different from “regular” photos in at least three important
ways:

o objects are portrayed from an overhead (and unfamiliar)
position,

o very often, infrared wavelengths are recorded, and

o photos are taken at scales most people are unaccustomed to
seeing.

These “basic elements” can aid in identifying objects on aerial
photographs.

o Tone (also called Hue or Color) -- Tone refers to the relative
brightness or color of elements on a photograph. It is, perhaps,
the most basic of the interpretive elements because without tonal
differences none of the other elements could be discerned.

o Size—The size of objects must be considered in the context of
the scale of a photograph. The scale will help you determine if an
object is a stock pond or Lake Minnetonka.

o Shape—refers to the general outline of objects. Regular
geometric shapes are usually indicators of human presence and
use. Some objects can be identified almost solely on the basis of
their shapes.

 the Pentagon Building

 (American) football fields

 cloverleaf highway interchanges


o Texture—The impression of “smoothness” or
“roughness” of image features is caused by the
frequency of change of tone in photographs. It is
produced by a set of features too small to identify
individually. Grass, cement, and water generally appear
“smooth”, while a forest canopy may appear “rough”.

o Pattern (spatial arrangement) -- The patterns formed by objects
in a photo can be diagnostic. Consider the difference between (1)
the random pattern formed by an unmanaged area of trees and
(2) the evenly spaced rows formed by an orchard.

o Shadow—Shadows aid interpreters in determining the height of
objects in aerial photographs. However, they also obscure objects
lying within them.

o Site—refers to topographic or geographic location. This
characteristic of photographs is especially important in identifying
vegetation types and landforms. For example, large circular
depressions in the ground are readily identified as sinkholes in
central Florida, where the bedrock consists of limestone. This
identification would make little sense, however, if the site were
underlain by granite.

o Association—Some objects are always found in association with
other objects. The context of an object can provide insight into
what it is. For instance, a nuclear power plant is not (generally)
going to be found in the midst of single-family housing.

Advantages of Aerial Photography over Ground-Based Observation

 Aerial photography offers an improved vantage point.

 Aerial photography has the capability to stop action.

 It provides a permanent recording.

 It has broader spectral sensitivity than the human eye.

 It has better spatial resolution and geometric fidelity than many
ground-based sensing methods.

Types of Aerial Photography

Black and White

Color

Color Infrared

In 1903 or 1904 the first reliable black and white infrared film was
developed in Germany. The film emulsion was adjusted slightly
from regular film to be sensitive to wavelengths of energy just
slightly longer than red light and just beyond the range of the
human eye. By the 1930s, black and white IR films were being
used for landform studies, and from 1930 to 1932 the National
Geographic Society sponsored a series of IR photographs taken
from hot air balloons.

Throughout the 1930s and 1940s, the military was hard at work
developing color infrared film, eager to exploit it for surveillance.
By the early 1940s the military had succeeded: it developed a film
that was able to distinguish camouflaged equipment from
surrounding vegetation. Within months, however, an IR-reflecting
paint was developed for use on military vehicles, effectively
making IR film technology useless to the military, and the effort
was abandoned.
The scientific community, however, has made continuous use of
the film technology.

Color infrared film is often called “false-color” film. Objects that
are normally red appear green, green objects (except vegetation)
appear blue, and “infrared” objects, which normally are not seen
at all, appear red.

The primary use of color infrared photography is vegetation
studies. This is because healthy green vegetation is a very strong
reflector of infrared radiation and appears bright red on color
infrared photographs.
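The band-to-color mapping described above can be sketched in a few lines. The function name, the tiny nested-list "images", and the sample pixel values below are all hypothetical; a real workflow would use NumPy arrays and actual band imagery.

```python
# Assemble a color-infrared (false-color) composite from single-band images:
# NIR is displayed as red, the red band as green, and the green band as blue.

def false_color_composite(nir, red, green):
    """Stack three single-band images (nested lists of equal shape)
    into per-pixel (R, G, B) display tuples using the CIR assignment."""
    height, width = len(nir), len(nir[0])
    return [[(nir[r][c], red[r][c], green[r][c])
             for c in range(width)]
            for r in range(height)]

# A single "healthy vegetation" pixel: very bright in NIR, darker in
# the visible bands -- so it displays as bright red, as the text notes.
nir, red, green = [[230]], [[40]], [[70]]
print(false_color_composite(nir, red, green))  # [[(230, 40, 70)]]
```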

Applications of Aerial Photography

Introduction: The Scope of Air Photography

Land-Use Planning and Mapping

Geologic Mapping

Archaeology

Species Habitat Mapping

Integration of Aerial Photography into GIS

Latitude (shown as a horizontal line) is the angular distance, in degrees,
minutes, and seconds, of a point north or south of the Equator.
Lines of latitude are often referred to as parallels.
Longitude (shown as a vertical line) is the angular distance, in degrees,
minutes, and seconds, of a point east or west of the Prime (Greenwich)
Meridian. Lines of longitude are often referred to as meridians.
Distance between Lines If you divide the circumference of the earth
(approximately 25,000 miles) by 360 degrees, the distance on the
earth’s surface for each one degree of latitude or longitude is just over
69 miles, or 111 km. Note: As you move north or south of the equator,
the distance between the lines of longitude gets shorter until they
actually meet at the poles. At 45 degrees N or S of the equator, one
degree of longitude is about 49 miles.
Minutes and Seconds For precision purposes, degrees of longitude
and latitude have been divided into minutes (′) and seconds (″). There
are 60 minutes in each degree. Each minute is divided into 60 seconds.
Seconds can be further divided into tenths, hundredths, or even
thousandths.
For example, our office on Galveston Island, Texas, USA, is located at
29 degrees, 16 minutes, and 22 seconds north of the equator, and 94
degrees, 49 minutes and 46 seconds west of the Prime Meridian.
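The arithmetic above can be written out directly. This sketch uses the text's rough 25,000-mile circumference and the fact that meridians converge by a factor of cos(latitude); the function name is illustrative.

```python
import math

def miles_per_degree_longitude(latitude_deg, circumference_mi=25000.0):
    """Approximate ground distance of one degree of longitude at a given
    latitude, using the rough 25,000-mile circumference from the text."""
    miles_per_degree_at_equator = circumference_mi / 360.0  # ~69.4 miles
    # Lines of longitude converge toward the poles by a factor of cos(lat).
    return miles_per_degree_at_equator * math.cos(math.radians(latitude_deg))

print(round(miles_per_degree_longitude(0), 1))   # ~69.4 miles at the equator
print(round(miles_per_degree_longitude(45), 1))  # ~49.1 miles at 45 degrees
print(round(miles_per_degree_longitude(90), 1))  # 0.0 at the poles
```

This matches the text's figures: just over 69 miles per degree at the equator, and about 49 miles at 45 degrees north or south.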

What is Georeferencing?

Georeferencing is the process of scaling, rotating, translating and
deskewing the image to match a particular size and position.

The term georeference will be familiar to GIS users, but general CAD users may
have never seen the word before, even though the function is very useful for their
work. The word was originally used to describe the process of referencing a map
image to a geographic location.
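The scale, rotate and translate steps can be sketched as a simple pixel-to-world transform. The function and parameter names below are illustrative, not from any particular GIS or CAD package, and skew correction is omitted for brevity.

```python
import math

def make_georeference(scale_x, scale_y, rotation_deg, origin_x, origin_y):
    """Build a pixel->world transform: scale, rotate, then translate.
    A full georeference would also handle deskewing."""
    theta = math.radians(rotation_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def transform(col, row):
        x = scale_x * col          # scale pixels to ground units
        y = scale_y * row          # negative scale_y flips image rows
        # Rotate about the image origin, then shift to the world origin.
        world_x = origin_x + x * cos_t - y * sin_t
        world_y = origin_y + x * sin_t + y * cos_t
        return world_x, world_y

    return transform

# A 0.5 m/pixel image, unrotated, anchored at a hypothetical
# easting/northing of (500000, 3200000).
to_world = make_georeference(0.5, -0.5, 0.0, 500000.0, 3200000.0)
print(to_world(100, 200))  # (500050.0, 3199900.0)
```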

Coordinate system
Arrangement of reference lines or curves used to identify the location of
points in space. In two dimensions, the most common system is the
Cartesian (after René Descartes) system. Points are designated by their
distance along a horizontal (x) and vertical (y) axis from a reference
point, the origin, designated (0, 0). Cartesian coordinates also can be
used for three (or more) dimensions. A polar coordinate system locates
a point by its direction relative to a reference direction and its distance
from a given point, also the origin. Such a system is used in radar or
sonar tracking and is the basis of bearing-and-range navigation
systems. In three dimensions, it leads to cylindrical and spherical
coordinates.
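As a small illustration of the two systems, a bearing-and-range fix (polar) can be converted to Cartesian coordinates. The navigation-style convention used here (bearing measured clockwise from north) is an assumption for the example.

```python
import math

def polar_to_cartesian(distance, bearing_deg):
    """Convert a radar-style bearing-and-range fix to Cartesian (x, y).
    Bearing is measured clockwise from north, as in navigation."""
    theta = math.radians(bearing_deg)
    x = distance * math.sin(theta)  # east component
    y = distance * math.cos(theta)  # north component
    return x, y

x, y = polar_to_cartesian(10.0, 90.0)  # a target 10 units due east
print(round(x, 6), round(y, 6))  # 10.0 0.0
```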

CAMERA
Following is a description of the main types of cameras used on satellites.

The Strip Camera

Strip cameras record images by moving film past a fixed slit in the focal
plane as the camera is moved forward. The slit remains fixed, and the
image is formed on the film as it moves past the open slit. This camera
is used in missions requiring object height determinations, flown on an
aircraft-based platform. Other uses are airport runway inspection,
highway and railroad studies, selection of rights of way for pipelines
and power lines (not in India, though), and determination of tree
types for forestry applications.
The disadvantage of this type of camera is that there can be a ‘banding’
effect on the strip photograph due to cyclic changes of exposure. Also,
since the slit is continuously open, if the film velocity is not steady, as
can happen due to aircraft vibrations, motion blurring will be introduced
in the photograph.

The Panchromatic Camera

This is the most widely used camera in satellite imagery applications.
Also called the single-lens camera, it consists of the usual optics,
which focus light on a CCD array. The CCD array converts the light
falling on it to voltage, which is then sampled and quantized to get the
actual bitstream that represents the picture in digital form.
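The sample-and-quantize step can be sketched as follows. The voltage range, bit depth, and sample values are hypothetical; a real CCD readout does this in hardware.

```python
def quantize(voltages, v_max=1.0, bits=8):
    """Map sampled sensor voltages onto 2**bits discrete digital levels,
    the way a CCD readout produces digital pixel values."""
    levels = 2 ** bits
    digital = []
    for v in voltages:
        v = min(max(v, 0.0), v_max)           # clip to the sensor range
        code = int(v / v_max * (levels - 1))  # scale to 0..levels-1
        digital.append(code)
    return digital

samples = [0.0, 0.25, 0.5, 1.0, 1.3]  # hypothetical sampled voltages
print(quantize(samples))  # [0, 63, 127, 255, 255]
```

Note that the over-range 1.3 V sample saturates at the top digital level, just as an over-exposed pixel saturates in a real image.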

This camera is also used on aircraft platforms, but there, instead of
CCD arrays, photographic film is usually used. The panchromatic camera
is so called because it is sensitive across the entire range of visible
wavelengths, often extending into the near infrared. The main
characteristics of a panchromatic camera are:

 Low geometric distortion, so it can be used for photogrammetric
purposes.

 A low-distortion lens system is employed, held in position relative
to the plane of the film.

 A frame of imagery is acquired with each opening of the camera
shutter, which is generally tripped at a set frequency.

 Focal length usually varies from a few centimetres to more than a
metre; focal lengths of 150 mm, 300 mm and 450 mm are commonly
used.

The panchromatic camera is the most widely used camera in remote
sensing applications in general. It finds uses in photogrammetry, forest
and land cover surveying, and in gathering visual and near-IR band
data.

The Panoramic Camera

This camera is designed to photograph a wide area and therefore has a
lens with a wide field of view. This enables it to photograph a large
area, typically 40 to 50 kilometers in length and breadth. There are
different kinds of panoramic cameras, and the major types are listed
below:

Wide Angle Lens Camera

This camera has a wide-angle lens, hence the name. This allows a larger
area to be captured in a single photograph compared to normal
cameras.
Rotating Lens Camera

This type of camera has the film in a semi-circular assembly, and the
lens rotates in an arc, always keeping the same distance from the film,
thus maintaining focus. As the lens rotates, it receives reflected light
from the surface and focuses it on the film through a slit. This allows
the camera to take a picture across an arc of 180 degrees.
Rotating Prism Camera
This is nearly the same as the above type; the only difference is that
the lens remains stationary and a rotating prism is used to direct the
light.

The panoramic camera is able to cover a large area in a single
photograph with clear details. However, because the image is taken over
a larger area, distortion is introduced by differing weather conditions
in different parts of the image. A geometric distortion is also
introduced by the constructional features of the camera. So while the
panoramic camera is very useful for preliminary surveys, it cannot be
used for photogrammetry.

The Multi Lens Camera

This camera has four lenses, each of which focuses light on its own film
roll. Each of these lens assemblies is identical except that they carry
different filters: one has a red filter, one green, one blue, and one
infrared. We can thus take photographs of exactly the same area on the
ground in four different bands.

These photographs can be viewed in a special viewer in real or false
color, or in various combinations of filters, to view a particular
feature with enhanced clarity. This camera is now falling out of favour,
mainly because a combination of four identical cameras, suitably
coupled, can perform the same function, and because the technology of
multi-spectral scanners has advanced to such an extent that they are now
the preferred instruments for this purpose.
