ACKNOWLEDGEMENT
We gratefully acknowledge the support of Prof. Anjana Vyas for giving us the opportunity to
present these papers. Thanks also to the anonymous referees who provided very useful
comments on earlier drafts. Responsibility for the contents, of course, rests with the authors.
-Readers
GIS Reader
Aerial Photography:
Although the first, rather primitive photographs were taken as "stills" on the ground, the idea of
photographing the Earth's surface from above, yielding the so-called aerial photo, emerged in the
1860s with pictures from balloons. From then until the early 1960s, the aerial photograph
remained the single standard tool for depicting the surface from a vertical or oblique perspective.
Aerial photography refers to photographs taken from airborne platforms. It can be classified
under the following heads:
Space Imaging:
Space imaging can be classified under the following heads:
• Space platform
• Sensors
• Interpretation equipment
Space Platforms:
The platform used for space imaging is a spacecraft. Space remote sensing started in earnest
during the 1950s; however, it was the launch of the Sputnik-1 spacecraft by the USSR in 1957 that
opened a new era in remote sensing.
The systematic observation and imaging of the earth's surface from orbiting satellites started in the 1960s.
The launch of the first Earth Resources Technology Satellite, ERTS-1 (later known as Landsat-1), in
July 1972 was undoubtedly the greatest advancement in earth orbital imaging. The first
American space workshop took over 35,000 images of the earth with a six-camera multispectral array, a
long-focal-length earth terrain camera, a thirteen-channel multispectral scanner and two
microwave systems. Subsequently, the satellites from Landsat-2 (launched 22 January 1975) to
Landsat-5 (launched 1 March 1984) introduced a new generation of earth resources satellites with
improved spatial resolution, radiometric sensitivity and a faster data supply rate.
The Indian Remote Sensing satellite IRS-1A, launched on 17 March 1988 and equipped with
multiband imaging instruments, was India's first remote sensing satellite; it and later launches are discussed below.
Sensors:
The Landsat series of satellites carried mainly three sensor systems, viz. the multispectral scanner (MSS),
the return beam vidicon (RBV) camera and the thematic mapper (TM). SPOT and the Indian Remote
Sensing satellites carry a different basic sensor system: a linear array of charge-coupled
devices (CCDs).
Interpretation equipments:
Simple equipment for the visual interpretation of satellite imagery includes the mirror stereoscope,
the magnifying glass and the light table.
Development in India:
India became the seventh nation to achieve orbital capability in July 1980 and is pressing ahead
with an impressive national programme aimed at developing launchers as well as nationally
produced communications, meteorological and Earth resources satellites. Prof. U.R. Rao,
Chairman of ISRO, said in October 1984 that space technology had given India the
opportunity to convert backwardness into an asset; developing countries could bypass the
intermediate technology stage and leapfrog into the high technology area. Like France, India has
benefited from simultaneous co-operation with the CIS/USSR, the US and ESA.
India's launchers:
Indian Space Research Organization (ISRO) carried out its first successful SLV-3 launch on 18
July 1980, thus adding India to the list of space-faring nations. The current generation of
launchers, by means of the PSLV (Polar Satellite Launch Vehicle), fully successful on its second
attempt in October 1994, provides a capability of placing a 1-ton class IRS satellite into a Sun-synchronous
orbit, and is now offered commercially through the Antrix Corporation. An upgrade
to the Geosynchronous Satellite Launch Vehicle (GSLV) is underway to provide a 2.5-ton class
launch capability into geostationary orbit by 1998-99.
Sl No | Satellite | Date of Launch | Launch Vehicle | Status | Remarks
1 | IRS 1A | 17 March 1988 | Vostok, USSR | Mission Completed (Retired from | It carried two sensors, the LISS-1 (Linear Imaging Self-Scanning System, 72.5-meter
Conclusion:
With the satellites designed and built by India in the INSAT and IRS series, the country has
started to reap the benefits of space technology for developmental applications, specifically in the
areas of communication, broadcasting, meteorology, disaster management and the survey and
management of resources. The planned launches of more powerful satellites will further enhance
and extend the benefits of space technology. The successful launch of PSLV and the progress
made in the development of GSLV give confidence in the capability of India to launch the IRS
and INSAT class of satellites from its own soil. Thus, India today has a well-integrated and self-
supporting space programme which is providing important services to society.
Some definitions:
Wavelength (λ) = the mean distance between successive wave maxima or minima. The
most common unit used to measure wavelength is the micrometer (μm).
Frequency (ν) = the number of wavelengths that pass a fixed point per unit time. Its most
frequently used unit is the hertz (Hz).
The relationship between the wavelength & frequency of electromagnetic radiation may be expressed
by the following formula:
C = λν --------------------------------- (1)
where C = speed of light (3 × 10^8 m/s).
From the above formula it is clear that frequency is inversely proportional to wavelength:
the higher the frequency, the shorter the wavelength, & vice versa.
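Equation (1) can be checked with a short Python sketch; the 0.55 μm green-light wavelength used below is an illustrative choice, not a value from the text:

```python
# Relation C = lambda * nu for electromagnetic radiation.
C = 3.0e8  # speed of light, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency (Hz) of EMR with the given wavelength (m)."""
    return C / wavelength_m

# Green light at 0.55 micrometers (0.55e-6 m):
nu = frequency_hz(0.55e-6)
print(f"{nu:.3e} Hz")  # about 5.455e14 Hz
```

Doubling the wavelength halves the frequency, which is the inverse proportionality stated above.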
Principles of EMR:
EMR occurs as a continuum of wavelengths & frequencies, from short wavelength, high
frequency to long wavelength, low frequency. This continuum is known as the electromagnetic spectrum.
The portion of the electromagnetic spectrum visible to human eyes ranges from wavelengths of about
0.4 μm to 0.7 μm. The color blue is ascribed to the approximate range of 0.4 to
0.5 μm, green to the range 0.5 to 0.6 μm, & red to 0.6 to
0.7 μm. Ultraviolet (UV) energy adjoins the blue end of the visible portion of the
spectrum, and infrared (IR) energy adjoins the red end. According to their wavelengths,
IR waves are classified as near IR (0.7 to 1.3 μm),
mid IR (1.3 to 3 μm) and thermal IR (3 to 14 μm).
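The band boundaries given above can be collected into a small lookup function. This is only an illustrative sketch using the ranges as stated in the text; everything below 0.4 μm is simply labelled ultraviolet here, and everything above 14 μm is lumped together:

```python
def spectral_region(wavelength_um: float) -> str:
    """Name the spectral region for a wavelength in micrometers,
    using the ranges given in the text."""
    if wavelength_um < 0.4:
        return "ultraviolet"
    if wavelength_um < 0.5:
        return "visible (blue)"
    if wavelength_um < 0.6:
        return "visible (green)"
    if wavelength_um < 0.7:
        return "visible (red)"
    if wavelength_um < 1.3:
        return "near IR"
    if wavelength_um < 3.0:
        return "mid IR"
    if wavelength_um <= 14.0:
        return "thermal IR"
    return "beyond thermal IR"

print(spectral_region(0.65))  # visible (red)
print(spectral_region(10.0))  # thermal IR
```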
The relationship between the frequency & energy of quanta is expressed as follows:
Q = hν ------------------------ (2)
where Q = energy of a quantum, measured in joules (J)
h = Planck's constant (6.626 × 10^-34 J·s)
By substituting ν = C/λ (from equation 1) into equation 2:
Q = hC/λ --------------------- (3)
From equation 3 it is clear that the energy of a quantum is inversely proportional to
its wavelength, i.e. the longer the wavelength involved, the lower its energy content, & vice versa.
This relationship has very important implications for remote sensing because it suggests that
longer wavelength energy, such as microwave emission, is more difficult to sense than shorter
wavelength energy, such as thermal IR.
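The implication of equation (3) can be made concrete with a short sketch comparing the energy of a microwave quantum with that of a thermal IR quantum; the 10 μm and 1 cm wavelengths below are illustrative choices:

```python
H = 6.626e-34  # Planck's constant, J*s
C = 3.0e8      # speed of light, m/s

def quantum_energy_j(wavelength_m: float) -> float:
    """Energy of one quantum, Q = hC/lambda (equation 3)."""
    return H * C / wavelength_m

thermal_ir = quantum_energy_j(10e-6)  # 10 um, thermal IR
microwave = quantum_energy_j(0.01)    # 1 cm, microwave
print(microwave / thermal_ir)         # ~0.001: far lower energy per quantum
```

The microwave quantum carries about a thousandth of the energy of the thermal IR quantum, which is why naturally emitted microwave energy is harder to detect.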
Substances may have color because of their differences in energy levels.
Sources of electromagnetic radiation energy:
The Sun is the main initial source of the EMR recorded by remote sensing systems, although all
objects above absolute zero (0 K, or -273 °C), including water, soil, vegetation etc., also radiate EMR.
The thermonuclear fusion taking place in the Sun yields a continuous spectrum of
electromagnetic energy. The roughly 6000 K temperature of the Sun's surface produces a large amount of
relatively short wavelength (dominantly 0.483 μm) energy that travels through the vacuum of
space & the atmosphere of the earth at the speed of light. Some of this energy is intercepted by the
earth's surface, which may reflect some of the energy directly back out to space or may absorb
the short wavelength energy & then reemit it at a longer wavelength.
The atmosphere can modify the intensity & spectral distribution of EMR, and even its direction of
propagation. The net effect of the atmosphere varies with the path length and the magnitude of the
energy level of the EMR being sensed, as well as with atmospheric conditions. Scattering & absorption
of EMR in the atmosphere are the primary causes of these effects.
Scattering:
The atmosphere contains aerosol particles & gas molecules that scatter electromagnetic energy
according to its wavelength. Aerosol particles such as water vapor, suspended particulate matter
(SPM) & smoke scatter the EMR. Scattering causes changes in the
direction & intensity of radiation. Generally, scattering decreases with increasing wavelength
of EMR; therefore, radiation near the blue end (0.4 – 0.5 μm) of the visible portion is
scattered much more than radiation at the longer visible wavelengths. Consequently, we see a
blue sky on a clear day.
Absorption:
Atmospheric absorption results in an even more effective loss of electromagnetic energy than
scattering. Atmospheric constituents such as water vapor (H2O), carbon dioxide (CO2), ozone
(O3) & suspended particulate matter absorb considerable amounts of EMR. However, absorption is
selective by wavelength: EMR with wavelengths shorter than 0.3 μm is completely absorbed by the
ozone (O3) in the upper atmosphere, whereas water particles in clouds absorb EMR at wavelengths
less than about 0.3 μm.
When dealing with the interaction of EMR with surface features, two points have to be considered:
the material type and condition of the object, and the wavelength of the EMR.
These factors determine the proportions of energy reflected, absorbed
and transmitted; thus two objects may be indistinguishable in one spectral range and distinct in
another wavelength band. Because most remote sensing systems operate in the wavelength regions
in which reflected energy predominates, the reflectance properties of earth features are very
important. Hence it is useful to express the energy balance relationship
in the form
Er(λ) = Ei(λ) − [Ea(λ) + Et(λ)]
That is, reflected energy equals the energy incident on a given feature reduced by the energy
that is either absorbed or transmitted by that feature.
The geometric manner in which an object reflects energy is also an important consideration and it
depends upon the roughness of the object. Specular reflectors are flat surfaces that manifest mirror
like reflections, where the angle of reflection equals the angle of incidence. Diffuse reflectors are
rough surfaces that reflect uniformly in all directions. Most surfaces are neither perfectly
specular nor perfectly diffuse reflectors; their characteristics lie somewhere between the two extremes.
Reflection of the EMR is dictated by surface roughness in comparison to the wavelength of the
energy incident upon it. When the wavelength of incident energy is much smaller than the surface
height variations or the particle sizes that make up a surface, the reflection from the surface is
diffuse.
Diffuse reflections contain spectral information on the colour of the reflecting surface, whereas
specular reflectances do not. Hence, in remote sensing, we are most often measuring the diffuse
reflectance properties of terrain features.
Reflectance characteristics may be quantified by measuring the portion of incident energy that is
reflected. This is mathematically defined as ρλ = [Er(λ)/Ei(λ)] × 100, i.e. (energy of
wavelength λ reflected from the object / energy of wavelength λ incident upon the object) × 100.
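The percent-reflectance definition above can be sketched directly; the energy readings in the example are hypothetical values, not measurements from the text:

```python
def percent_reflectance(reflected: float, incident: float) -> float:
    """Spectral reflectance rho_lambda = (Er / Ei) * 100, in percent."""
    return reflected / incident * 100.0

# Hypothetical readings: 45 energy units reflected of 150 incident.
print(percent_reflectance(45.0, 150.0))  # 30.0 percent
```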
Conclusion:
A good understanding of the electromagnetic spectrum is necessary for remote sensing,
because remote sensing utilizes various bands to obtain aerial photographs and
satellite images. The nature of the different bands and their interaction with the atmosphere must be
analysed to obtain proper results.
Visible light, radio waves, heat, ultraviolet rays and x-rays are various forms of
electromagnetic energy. All this energy radiates in accordance with basic wave theory,
which describes electromagnetic energy as travelling in a harmonic, sinusoidal fashion at
the velocity of light, c, where
c = λν
ν = wave frequency, the number of peaks passing a fixed point in space per unit time
λ = wavelength, the distance from one wave peak to the next.
Since c is constant (c = 3 × 10^8 m/s), λ and ν are inversely proportional to each other.
Electromagnetic waves are categorised by their wavelength location within the electromagnetic
spectrum. The unit used to measure wavelength along the spectrum is the micrometer (µm):
1 micrometer = 1 × 10^-6 m.
ELECTROMAGNETIC SPECTRUM
[Figure courtesy: http://chesapeake.towson.edu/data/all_electro.asp]
When all of the possible forms of radiation are classified and arranged according to wavelength or
frequency, the result is the electromagnetic spectrum. The electromagnetic spectrum includes
types of radiation that range from extremely low energy, long wavelength, low frequency radiation
such as radio waves to extremely high energy, short wavelength, high frequency types such
as x-rays and gamma rays.
The atmosphere can have a profound effect on, among other things, the intensity and spectral
composition of the radiation available to any sensing system.
These effects are caused principally through the mechanisms of atmospheric scattering and
absorption:
1) Scattering
2) Absorption
Scattering:
Atmospheric scattering is the unpredictable diffusion of radiation by particles in the atmosphere.
1) Rayleigh scatter - Rayleigh scatter happens when radiation interacts with atmospheric
molecules and other tiny particles that are much smaller in diameter than the wavelength
of the interacting radiation.
The effect of Rayleigh scatter is inversely proportional to the 4th power of wavelength; hence short
wavelengths scatter much more than long wavelengths.
Rayleigh scatter is a primary cause of 'haze' in imagery: a photograph taken from high altitude
appears bluish grey.
2) Mie scatter - Mie scatter happens when atmospheric particle diameters essentially equal
the wavelength of the energy being sensed. Water vapour and dust are major causes of
Mie scatter.
3) Non-selective scatter - Non-selective scatter happens when the diameters of the particles causing
scatter are much larger than the wavelengths of the energy being sensed. Water droplets
are a major cause of such scatter.
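The λ^-4 dependence of Rayleigh scatter can be illustrated numerically. The wavelengths chosen below (0.45 µm for blue, 0.7 µm for red) are illustrative values within the visible band:

```python
def rayleigh_relative(wavelength_um: float, reference_um: float = 0.7) -> float:
    """Rayleigh scattering intensity relative to a reference wavelength,
    using the inverse 4th-power proportionality."""
    return (reference_um / wavelength_um) ** 4

# Blue light (0.45 um) versus red light (0.7 um):
print(round(rayleigh_relative(0.45), 1))  # roughly 5.9x more scattering
```

This factor of nearly six is why the clear sky appears blue and why high-altitude photographs look bluish grey unless haze filtering is applied.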
Absorption:
Atmospheric absorption results in the effective loss of energy to atmospheric constituents. The
most efficient absorbers of solar radiation include water vapour, carbon dioxide and
ozone. The wavelength ranges in which the atmosphere is particularly transmissive of energy are
referred to as 'atmospheric windows'.
Sensor design must account for the interaction and interdependence between the primary sources of
electromagnetic energy, the atmospheric windows through which source energy may be transmitted
to and from earth surface features, and the spectral sensitivity of the sensors available to detect
and record the energy. The choice of spectral range of the sensor has to be based on the manner in
which the energy interacts with the features under investigation.
[Figure courtesy: http://chesapeake.towson.edu]
Proportions of energy reflected, absorbed and transmitted will vary for different earth features,
depending on their material type and condition. Within a given feature type, the proportions of
reflected, absorbed and transmitted energy will vary at different wavelengths. Within the visible
portion of the spectrum, these spectral variations result in the visual effect called 'colour'.
ρλ = [Er(λ) / Ei(λ)] × 100
= (energy of wavelength λ reflected from the object / energy of wavelength λ incident upon
the object) × 100
Reflection is a function of the surface roughness of the object. Specular reflectors are flat surfaces
that manifest mirrorlike reflections, where the angle of reflection equals the angle of incidence.
Diffuse or Lambertian reflectors are rough surfaces that reflect uniformly in all
directions. Most surfaces lie between these extremes, showing the geometric character of specular,
near-specular, near-diffuse or diffuse reflectors depending on the surface's roughness in
comparison to the wavelength of the energy incident upon it. When the wavelength of incident
energy is much smaller than the surface height variations or the particle sizes that make up a
surface, the reflection from the surface is diffuse.
[Figure courtesy: http://chesapeake.towson.edu/data/all_electro.asp]
Diffuse reflections contain spectral information on the colour of the reflecting surface, whereas
specular reflections do not. The reflectance characteristics of earth surface features may be
quantified by measuring the portion of incident energy that is reflected. Measured as a function
of wavelength, this quantity is called the spectral reflectance.
The photographs above show an example of a remote sensing technique that relies on high-energy
radiation: comparing views of the Sun in various spectral bands.
a) Vegetation: Chlorophyll strongly absorbs energy in the wavelength bands centered at about 0.45 and
0.67 µm. Hence, our eyes perceive healthy vegetation as green in colour because of the
very high absorption of blue and red energy by plant leaves and the very high reflection
of green energy.
b) Soil: Some of the factors affecting soil reflectance are moisture content, soil texture, surface
roughness, presence of iron oxide and organic matter content. These factors are complex,
variable and interrelated. Soil moisture content is strongly related to
soil texture: coarse, sandy soils are usually well drained, resulting in low moisture
content and relatively high reflectance; poorly drained fine-textured soils will generally
have lower reflectance. In the absence of moisture, coarse-textured soils will appear darker than
fine-textured soils.
c) Water: Clear water absorbs relatively little energy having wavelengths less than about 0.6 µm.
Reflectance changes with changes in the turbidity and chlorophyll concentration of the water.
• Spectral responses measured by remote sensors over various features often permit an
assessment of the condition of the features. These responses are known as spectral signatures.
• Features that show different characteristics at different geographic locations at a given
point of time cause spatial effects.
The illumination incident on a scene has two components: 1) direct sunlight and 2) diffused skylight.
Irradiance varies with the seasonal changes in solar elevation angle and the changing distance
between the earth and the sun.
The data analysis process involves examining the data using various viewing and interpretation
devices to analyze pictorial data and/or a computer to analyze digital sensor data. With the aid of
reference data, the analyst extracts information about the type, extent, location and condition of
various resources over which the sensor data were collected. These data are then compiled, generally
in the form of hard copy maps and tables or as computer files that can be merged with other layers
of information in a geographic information system (GIS). Finally, the information is presented
to the users, who apply it in their decision-making process.
Energy Sources and Radiation Principles
Visible light refers to only one of the many forms of electromagnetic energy, others being radio
waves, heat, ultraviolet waves and x-rays. All this energy is assumed to be inherently similar,
radiating in accordance with the basic wave theory.
In remote sensing, electromagnetic waves are categorized by their wavelength location within the
electromagnetic spectrum. The most prevalent units for measuring wavelength along the spectrum are the
micrometer (µm), a unit of length equivalent to one-millionth of a meter, and the nanometer (nm), a
unit of length equivalent to one-billionth of a meter.
Although names (such as ultraviolet and microwave) are generally assigned to regions of the
electromagnetic spectrum for convenience, there exists no clear-cut dividing line between one
nominal spectral region and the next. At the very energetic (high frequency, short wavelength)
end are gamma rays and x-rays. Radiation in the ultraviolet region extends from about 1 nm to
about 0.36 µm. The visible region occupies the range
between 0.4 and 0.7 µm, or its equivalent of 400 to 700 nm. The infrared (IR) region spans
0.7 to 100 µm. At shorter wavelengths (near 0.7 µm) infrared radiation can be detected
by special film, while at longer wavelengths it is felt as heat.
Major regions of the electromagnetic spectrum
Region name | Wavelength | Comments
Gamma ray | < 0.03 nm | Entirely absorbed by the earth's atmosphere and not available for remote sensing
Most common sensing systems operate in one or several of the visible, IR or microwave portions
of the spectrum. Within the IR portion of the spectrum, it should be noted that only thermal IR
energy is directly related to the sensation of heat, near and mid – IR energy are not.
Also, as per wave theory, the longer the wavelength involved, the lower its energy content.
This has important implications in remote sensing: naturally emitted long
wavelength radiation, such as microwave emission from terrain features, is more difficult to sense
than radiation of shorter wavelengths, such as emitted thermal IR energy.
The sun remains the most obvious source of electromagnetic radiation for remote sensing.
However, all matter at temperatures above absolute zero (0 K, or -273 °C) continuously emits
electromagnetic radiation, including terrestrial objects. The energy radiating from an object is,
among other things, a function of its surface temperature (as expressed by the Stefan-Boltzmann
law), which in turn shapes its spectral distribution.
The earth's ambient temperature (i.e. the temperature of surface materials such as soil, water and
vegetation) is about 300 K (27 °C). Radiance from earth features thus peaks at a wavelength
of 9.7 µm (as per Wien's displacement law) and is termed 'thermal infrared' energy. This
wavelength of energy emitted by ambient earth features can be observed only with a non-photographic
sensing system.
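Wien's displacement law, λ_max = b/T with b ≈ 2898 µm·K, reproduces both peak wavelengths quoted in this reader (about 9.7 µm for 300 K earth features, and 0.483 µm for the roughly 6000 K sun); a minimal sketch:

```python
WIEN_CONSTANT = 2898.0  # Wien's displacement constant, in um*K

def peak_wavelength_um(temp_k: float) -> float:
    """Wien's displacement law: wavelength of peak emission, lambda_max = b / T."""
    return WIEN_CONSTANT / temp_k

print(round(peak_wavelength_um(300), 1))   # 9.7 um: ambient earth features
print(round(peak_wavelength_um(6000), 3))  # 0.483 um: the sun
```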
Certain sensors, such as radar systems, supply their own source of energy to illuminate features of
interest. These systems are termed as ‘active systems’, in contrast to ‘passive systems’ that sense
naturally available energy.
The effect of Rayleigh scatter is inversely proportional to the fourth power of wavelength, leading to a
stronger tendency to scatter short wavelengths than long wavelengths, e.g. the blue appearance of
the sky during the daytime.
Mie Scatter – occurs when atmospheric particle diameters essentially equal the wavelength of the
energy being sensed. This type of scatter tends to influence longer wavelengths. Water vapour
and dust are major causes of Mie scatter.
Non-selective scatter – occurs when the diameters of particles causing scatter are much larger
than the wavelengths of the energy being sensed. Water droplets, for example cause such scatter.
This scattering is non-selective with respect to wavelength. E.g. White appearance of clouds and
fog in visible wavelengths.
Absorption
In contrast to scatter, atmospheric absorption results in the effective loss of energy to atmospheric
constituents. This normally involves absorption of energy at given wavelengths, mostly by water
vapour, carbon dioxide and ozone. Absorption of electromagnetic energy at specific wavelengths
by these gases strongly influences 'where we look' spectrally with any given remote sensing
system.
Atmospheric windows:
Atmospheric windows are the wavelength ranges in which the atmosphere is particularly transmissive of
energy. Remote sensing data acquisition is limited to these non-blocked spectral regions. The
spectral sensitivity range of the eye coincides with both an
atmospheric window and peak level of energy from the sun. Emitted heat energy from the earth is
sensed through the windows at 3 to 5 µm and 8 to 14 µm using such devices as thermal scanners.
Multispectral scanners sense simultaneously through multiple, narrow wavelength ranges that can
be located at various points in the visible through the thermal spectral region. Radar and passive
microwave systems operate through a window in the region 1 mm to 1 m. The important point to
note is the interaction and interdependence between the primary sources of electromagnetic
energy, the atmospheric windows through which source energy may be transmitted to and from
earth surface features, and the spectral sensitivity of the sensors available to detect and record the
energy.
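The window ranges named above can be collected into a small lookup; the boundaries below are rough, illustrative values taken from this section (visible, the two thermal IR windows, and the microwave window), not a complete or precise atlas of atmospheric windows:

```python
# Approximate atmospheric windows (wavelengths in micrometers).
WINDOWS = {
    "visible": (0.4, 0.7),
    "thermal IR (3-5 um)": (3.0, 5.0),
    "thermal IR (8-14 um)": (8.0, 14.0),
    "microwave": (1_000.0, 1_000_000.0),  # 1 mm to 1 m, expressed in um
}

def windows_for(wavelength_um: float) -> list:
    """Names of atmospheric windows containing the given wavelength."""
    return [name for name, (lo, hi) in WINDOWS.items() if lo <= wavelength_um <= hi]

print(windows_for(10.0))  # ['thermal IR (8-14 um)']: thermal scanner territory
print(windows_for(0.55))  # ['visible']
```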
A plot of an object's spectral reflectance against wavelength is known as a spectral reflectance curve.
The configuration of spectral reflectance curves gives us insight
into the spectral characteristics of an object and has a strong influence on the choice of
wavelength region in which remote sensing data are acquired for a particular application.
Experience shows that many earth features of interest can be identified, mapped and
studied on the basis of their spectral characteristics. This makes it necessary to know and
understand the spectral characteristics of the particular features under investigation, and the
factors influencing these characteristics, in any given application.
Soil:
As shown in figure, soil shows considerably less peak and valley variation in reflectance, i.e.
factors that influence soil reflectance act over less specific spectral bands. Some of the factors
affecting soil reflectance are moisture content, soil texture, surface roughness, presence of iron
oxide and organic matter content. E.g. the presence of moisture in soil will decrease its
reflectance. Soil moisture content is strongly related to the soil texture: coarse, sandy soils are
usually well drained resulting in low moisture content and relatively high reflectance; poorly
drained fine-textured soils will generally have lower reflectance. In the absence of water, the soil
itself may exhibit the reverse tendency: coarse-textured soils will appear darker than fine-textured
soils. Two other factors that reduce soil reflectance are surface roughness and organic matter
content.
Water:
Considering the spectral reflectance of water, probably the most distinctive characteristic is the
energy absorption at near IR (NIR) wavelengths and beyond, i.e. water absorbs energy in these
wavelengths irrespective of its typology such as lake, streams or water contained in vegetation or
soil. Locating and delineating water bodies with remote sensing data are done most easily in near
IR wavelengths because of this absorption property.
Clear water absorbs relatively little energy having wavelengths less than about 0.6 µm. High
transmittance typifies these wavelengths with a maximum in the blue-green portion of the
spectrum.
Based on changes in the turbidity of water (because of the presence of organic or inorganic
materials), transmittance and therefore reflectance change dramatically. For example, water
containing large quantities of suspended sediments resulting from soil erosion usually has much
higher visible reflectance than other "clear" water in the same geographic area. Likewise, the
reflectance of clear water changes with chlorophyll concentration: increases in chlorophyll
concentration tend to decrease water reflectance in blue wavelengths and increase it in green
wavelengths. These changes have been used to monitor the presence and estimate the
concentration of algae via remote sensing data.
Many important water characteristics, such as dissolved oxygen concentration, pH and salt
concentration, cannot be observed directly through changes in water reflectance. However, such
parameters sometimes correlate with observed reflectance. In short, there are many complex
inter-relationships between the spectral reflectance of water and particular characteristics, which
require one to use appropriate reference data to correctly interpret reflectance measurements made
over water.
Temporal and spatial effects may complicate the task of analyzing the spectral reflectance
properties of earth features, but they may also be the keys to gleaning the
information sought in an analysis. E.g., the process of change detection is premised on the
ability to measure temporal effects; an example is tracking the change in suburban
development near a metropolitan area by using data obtained on two different dates.
An example of a useful spatial effect is the change in the leaf morphology of trees when they are
subjected to some form of stress. So, even though a spatial effect may complicate the analysis, at
times this effect may add just what is important in a particular application.
The dominance of sunlight versus skylight in any given image is strongly dependent on
weather conditions, while irradiance varies with the seasonal changes in solar elevation angle and
the changing distance between the earth and the sun.
Panchromatic sensors cover a wide band of wavelengths in the visible or near-infrared
spectrum. An example of a single-band sensor of this type would be a black and white
photographic film camera.
Multispectral sensors cover two or more spectral bands simultaneously, typically within the range
0.3 µm to 14 µm.
Hyperspectral sensors cover spectral bands narrower than multispectral sensors. Image data from
several hundred bands are recorded at the same time offering much greater spectral resolution
than a sensor covering wider bands.
Two types of multispectral scanners are distinguished: the whiskbroom scanner and the
pushbroom scanner.
Whiskbroom Scanner
A combination of a single detector plus a rotating mirror can be arranged in such a way that the
detector beam sweeps in a straight line over the earth across the track of the satellite at each
rotation of the mirror. In this way, the earth’s surface is scanned systematically line by line as the
satellite moves forward. Because of this sweeping motion, the whiskbroom scanner is also known
as the across-track scanner. The first multispectral scanners applied the whiskbroom principle.
Today many scanners are still based on this principle: NOAA/AVHRR, Landsat/TM, IRS/LISS.
At any instant in time, the scanner sees the energy within the system’s instantaneous field of view
(IFOV). The IFOV is normally expressed as the cone angle within which incident energy is
focused on the detector.
Pushbroom Scanner
Along-track scanners also use the forward motion of the platform to record successive scan lines
and build up a two-dimensional image, perpendicular to the flight direction. These systems are
also referred to as pushbroom scanners, as the motion of the detector array is analogous to the
bristles of a broom being pushed along a floor.
The pushbroom scanner is based on the use of charge-coupled devices (CCDs) for measuring
the electromagnetic energy. A CCD array is a line of photo-sensitive detectors that function
similarly to solid state detectors; a single element can be as small as 5 µm. Two-dimensional CCD
arrays used in remote sensing are more sensitive and have larger dimensions. The first satellite
sensor using this technology was the SPOT-1 HRV. High resolution sensors such as IKONOS and
OrbView-3 also apply the pushbroom principle, which enables a longer period of measurement over
a certain area, resulting in less noise and a relatively stable geometry. Since the CCD elements
continuously measure along the direction of the platform, this scanner is also referred to as an
along-track scanner.
The pushbroom scanner records one entire line at a time. The principal advantage over the
whiskbroom scanner is that each position (pixel) in the line has its own detector.
Linear array CCDs are designed to be very small, and a single array may contain over 10,000
individual detectors. Each spectral band of sensing requires its own linear array. Normally, the
arrays are located in the focal plane of the scanner such that each scan line is viewed by all arrays
simultaneously. Linear array systems afford a number of advantages over across-track mirror scanning systems. Firstly, linear arrays provide the opportunity for each detector to have a longer dwell time over which to measure the energy from each ground resolution cell. This enables a stronger signal to be recorded (and thus a higher signal-to-noise ratio) and a greater range in the signal levels that can be sensed, which leads to better radiometric resolution. In addition, the geometric integrity of linear array systems is greater because of the fixed relationship among detector elements recording each scan line. The geometry along each row of data (scan line) is similar to an individual photo taken by an aerial mapping camera. The geometric errors introduced into the sensing process by variations in the scan mirror velocity of across-track scanners are not present in along-track scanners. Because linear arrays are solid-state micro-electronic devices, along-track scanners are generally smaller in size and weight and require less power for their operation than across-track scanners. Also, having no moving parts, a linear array system has higher reliability and longer life expectancy.
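The dwell-time advantage can be illustrated with a back-of-the-envelope comparison (all figures below are hypothetical): a whiskbroom scanner must share each line period among every pixel across the swath, while each pushbroom detector integrates for the whole line period.

```python
# Toy comparison of per-pixel dwell time (all numbers hypothetical).
ground_speed = 6_800.0   # m/s, typical low-orbit ground-track speed
gsd = 30.0               # m, along-track pixel size
pixels_per_line = 6_000  # pixels across the swath

line_period = gsd / ground_speed                  # time to advance one line
pushbroom_dwell = line_period                     # every pixel has its own detector
whiskbroom_dwell = line_period / pixels_per_line  # one detector sweeps the whole line

print(f"line period      : {line_period * 1e3:.2f} ms")
print(f"pushbroom dwell  : {pushbroom_dwell * 1e3:.3f} ms per pixel")
print(f"whiskbroom dwell : {whiskbroom_dwell * 1e6:.3f} us per pixel")
```

With these numbers the pushbroom detector integrates several thousand times longer per pixel, which is the source of the stronger signal and better radiometric resolution described above.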
They use a linear array of detectors (A) located at the focal plane of the image (B) formed by lens
systems (C), which are "pushed" along in the flight track direction (i.e. along track). Each
individual detector measures the energy for a single ground resolution cell (D) and thus the size
and IFOV of the detectors determines the spatial resolution of the system. A separate linear array
is required to measure each spectral band or channel. For each scan line, the energy detected by
each detector of each linear array is sampled electronically and digitally recorded.
One disadvantage of linear array systems is the need to calibrate many more detectors. Another
current limitation to commercially available solid state arrays is their relatively limited range of
spectral sensitivity. Linear array detectors that are sensitive to wavelengths longer than the mid-
IR are not readily available.
Spectral Characteristics
To a large extent, the characteristics of a solid state detector are valid for a CCD-array. In
principle, one CCD-array corresponds to a spectral band and all the detectors in the array are
sensitive to a specific range of wavelengths. With current technologies, CCD array sensitivity
stops at 2.5 µm wavelength. If longer wavelengths are to be measured, other detectors need to be
used. One drawback of CCD arrays is that it is difficult to produce an array in which all the
elements have similar sensitivity. Differences between the detectors may be visible in the
recorded images as vertical banding.
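A minimal sketch of how such vertical banding can be reduced, assuming a simple gain-only model of detector non-uniformity (operational calibration instead uses laboratory-measured gain and offset for each element):

```python
import numpy as np

def destripe(img: np.ndarray) -> np.ndarray:
    """Reduce vertical banding from unequal CCD-element sensitivity by
    scaling each column so its mean matches the global image mean."""
    col_means = img.mean(axis=0)
    gain = img.mean() / col_means  # per-detector correction factor
    return img * gain              # broadcast the gains over all rows

# Synthetic scene with one detector 10% too sensitive (column 2).
scene = np.full((4, 5), 100.0)
scene[:, 2] *= 1.10
flat = destripe(scene)
print(np.allclose(flat.mean(axis=0), flat.mean()))  # True: banding removed
```

The key assumption is that each column images statistically similar terrain, so differences in column means can be attributed to the detectors rather than the scene.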
Geometric Characteristics
For each single line, pushbroom scanners have a geometry similar to that of aerial photos (which
have a ‘central projection’). In case of flat terrain, and a limited total field of view (FOV), the
scale is the same over the line, resulting in equally spaced pixels. The concept of IFOV cannot be
applied to pushbroom scanners.
Typical of most pushbroom scanners is the ability for off-track viewing. In such a situation, the
scanner is pointed towards areas to the left or right of the orbit track (off-track) or backwards or
forwards (along-track). This characteristic has two advantages: it can be used to produce stereo-images, and
it can be used to image an area that is not covered by clouds at a particular moment. When
applying off-track viewing, as with oblique photography, the scale in an image varies and
should be corrected for.
3. Hyperspectral imaging
Hyperspectral imaging is a technique that combines both conventional imaging and spectroscopy.
Using this technology, both the spatial and spectral information of an object can be acquired. The
imaging produces 3D images or Hyperspectral image cubes and uses optical elements, lenses,
spatial filters and image sensors to capture the content at multiple wavelengths.
Almost all sensors that are multispectral in function have had to sample the EM spectrum over a
relatively wide range of wavelengths in each discrete band. These sensors therefore have low
spectral resolution. This mode is referred to as broad-band spectroscopy. Spectral resolution can
be defined by the limits of the continuous wavelengths (or frequencies) that can be detected in the
spectrum. In remote sensors an interval of bandwidth of 0.2 µm in the Visible-Near IR would be
considered low spectral resolution and 0.01 µm as high resolution. (The term has a somewhat
different meaning in optical emission spectroscopy, where it refers to the minimum spacing in µm
or Angstroms between lines on a photographic plate or separable tracings on a strip chart.)
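Applying those bandwidth figures to the reflective part of the spectrum (a 0.4–2.5 µm span is assumed here for illustration) shows the jump in band count that separates broad-band from hyperspectral sensing:

```python
# How many bands fit into the 0.4-2.5 um reflective range at the two
# spectral resolutions quoted above (figures are illustrative).
spectral_range = 2.5 - 0.4                # um

broad_band = spectral_range / 0.2         # "low resolution" multispectral
hyperspectral = spectral_range / 0.01     # "high resolution" imaging spectrometer

print(f"broad-band sensor : about {broad_band:.1f} bands")   # 10.5
print(f"hyperspectral     : about {hyperspectral:.0f} bands")  # 210
```

This is consistent with the statement later in the section that a hyperspectral image consists of about a hundred or more adjacent spectral bands.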
Remote sensors with high spectral resolution are called hyperspectral imagers. With the
resulting hyperspectral curves, it is now practical to do rigorous analysis of surface compositions over
large areas. Moreover, data can be displayed either as spectral curves with detail similar to those
on the preceding page or as images similar to those obtained by Landsat, SPOT, etc. With spectral
curves we capture the valuable information associated with diagnostic absorption troughs, and
with images we get relatively pure scenes, colorized (through color compositing) from intervals
that represent limited color ranges in the visible or in false color for the near-IR (NIR).
Applications of multispectral scanner data are mainly in the mapping of land cover, vegetation,
surface mineralogy and surface water. Multispectral scanners are mounted on airborne and
spaceborne platforms. A multispectral scanner operates on the principle of selective sensing in
multiple spectral bands. Multispectral scanners typically cover the range from 0.3 to 14 µm.
The technique has emerged as a very powerful method for continuous sampling of broad intervals
of the spectrum. Such an image consists of about a hundred or more spectral bands that are
adjacent to each other, and the characteristic spectrum of every target pixel is acquired. This
precise information enables detailed analysis of a dynamic environment or any object.
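The image-cube idea can be sketched with a small synthetic array: indexing one pixel across all bands yields its spectrum, while indexing one band across all pixels yields an ordinary image (the array sizes here are arbitrary):

```python
import numpy as np

# A hyperspectral cube is a 3-D array: rows x columns x bands.
# A small synthetic cube stands in for real sensor data.
rows, cols, bands = 4, 4, 120
rng = np.random.default_rng(0)
cube = rng.random((rows, cols, bands))

# The full spectrum of one target pixel is a single slice:
spectrum = cube[2, 3, :]   # reflectance at every wavelength
print(spectrum.shape)      # (120,)

# ...while one band across the whole scene is an ordinary image:
band_image = cube[:, :, 60]
print(band_image.shape)    # (4, 4)
```

This dual view is exactly what the text describes: spectral curves for diagnostic analysis, and per-band images for color compositing.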
After years of being restricted to laboratories and the defense industry, the commercialization of
these technologies is well and truly underway. However, Frost & Sullivan expects the market to
grow more slowly over the short term than over the longer term, for various reasons
including lack of awareness of the products, lack of competition and price.
However, advances in technology, data processing algorithms and the increase in competition are
expected to aid in strong penetration into various industrial verticals over the long term.
6. Opto-Mechanical scanners
The imaging sensors on board the satellites were essentially opto-mechanical scanners. Most of
the limitations associated with photographic and TV imaging system are overcome in opto-
mechanical scanners. The principle of operation of an opto-mechanical scanner is shown
schematically in Figure.
The radiation emitted or reflected from the scene is intercepted by a scan mirror inclined at 45°
to the optical axis of the telescope. The telescope focuses radiation on to a detector. In this case,
the detector receives radiation from an area on the ground which is determined by the detector
size and focal length of the optics. This is called a picture element or a pixel. By rotating the scan
mirror the detector starts looking at adjacent picture elements on the ground. Thus, information is
collected pixel by pixel by the detector. If such an instrument is mounted on a moving platform-
like an aircraft or a spacecraft, such that the rotation of the scan mirror collects information from a
strip on the ground at right angles to the direction of motion of the platform and also if the
scanning frequency is adjusted such that by the time the platform moves through one picture
element the scan mirror is set to the start of the next scan line, then successive and contiguous
scan lines can be produced. Thus, in cross track direction information is collected from each pixel
(because of the scan mirror motion) to produce one line of image and in the along track direction
successive lines of image in contiguous fashion are produced by the platform motion. The scan
frequency has to be correctly adjusted, depending on the velocity of platform, to produce a
contiguous image.
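The required scan frequency follows directly from this condition: the mirror must complete one sweep in the time the platform advances one pixel on the ground. A rough calculation with illustrative numbers:

```python
# For contiguous coverage the mirror must complete one scan line in the
# time the platform advances one ground pixel (values illustrative).
ground_speed = 6_800.0  # m/s, platform ground-track speed
pixel_size = 30.0       # m, along-track pixel size
lines_per_sweep = 1     # detectors scanned per mirror sweep

scan_frequency = ground_speed / (pixel_size * lines_per_sweep)  # sweeps/s
print(f"required scan frequency: {scan_frequency:.1f} lines per second")
```

Placing several detectors per band in the focal plane (so that each sweep covers several lines) divides this frequency accordingly, which is one reason detectors are distributed in the focal plane as noted below.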
To produce multispectral imagery, the energy collected by the telescope is channeled to a spectral
dispersing system (a spectrometer). Such systems, which can generate imagery simultaneously in
more than one spectral band, are called Multispectral Scanners (MSS). The given figure gives the
functional block diagram of a multispectral scanner.
Thus, the MSS has got a scan mirror, collecting optics, dispersive system (which essentially
spreads the incoming radiation into different spectral bands) and a set of detectors appropriate for
the wavelength regions to be detected. The outputs of the detectors go through electronic
processing circuits. The data from the scene, along with other data like the attitude of the platform,
temperatures of the various subsystems etc. are formatted together and the combined information
is either recorded on a magnetic medium (as is usually the case with aircraft sensors) or
transmitted through telemetry for spacecraft sensors. Details of some of the major subsystems to
realize an opto-mechanical scanner are given below.
a) Scanning Systems: In an opto-mechanical imager, the scanning can be carried out either
in the object plane or in the image plane. In the image plane scanner, the scan mirror is
kept after the collecting optics near to the focal plane and the mirror directs each point in
the focal plane to the detector. Obviously such a system requires the collecting optics
corrected for the total field of view, which is quite difficult, especially if a reflective
system has to be used. However, it requires relatively smaller size of the scan mirror.
Though image plane scanning was used in some early opto-mechanical
multispectral scanners, it is not generally used because of the large field correction
required for the total field of view. Moreover, with the availability of linear array
CCDs, the scope of image plane scanning using mechanical systems is decreasing. In
object plane scanning the rays from the scene fall onto the scan mirror, which reflects
the radiation to the collecting telescope. Here the direction of rays at the collecting optics
remains the same irrespective of the scan mirror position. Thus when object plane scanning
is used the collecting optics need only be corrected for a small field around the optical
axis. The extent of field correction depends on the IFOV, and the distribution of detectors in
the focal plane for reducing scanning frequency or for additional spectral bands.
Of the three configurations, the Cassegrain system has the smallest tube length for the same
effective focal length and primary mirror diameter. Since it is desirable to keep the tube
length minimum in order to reduce weight and volume, space-borne opto-mechanical
scanners generally use the Cassegrain configuration as the collecting telescope.
c) Spectral dispersion system – The spectral dispersion system could be a commonly used
system such as a grating or a prism. There are also special beam splitters which selectively
transmit/reflect a particular band of wavelengths. The usage of such beam splitters, with
appropriate band-pass filters at the detector, facilitates specific spectral band selection.
d) Detectors – Different types of detectors are available to cover the entire OIR region. The
detector selection depends, among other things, on the required spectral response, specific
detectivity, responsivity and response time. The detectors are mainly of two types:
Useful spectral ranges for typical detectors (operating temperature of all detectors is 300 K unless
noted).
Thermal detectors – Thermal detectors absorb radiant energy, raising the detector temperature,
and a parameter of the device which changes with temperature is detected, viz. resistance in
the case of a bolometer, voltage in the case of a thermocouple.
Quantum detectors – In quantum detectors the absorbed photons excite electrons into the
conduction band, changing the electrical characteristics of the responsive element, or
electrons are emitted.
The multispectral scanner system on board the NASA Earth Resources Technology Satellite
LANDSAT-1, popularly known as MSS, was the first operational satellite-borne opto-mechanical
scanner for civilian applications.
Thematic Mapper:-
The Thematic Mapper is an advanced second-generation opto-mechanical multispectral scanner first
carried on board LANDSAT-4. TM provides 7 narrow spectral bands covering the visible, near-infrared,
middle-infrared and thermal-infrared spectral regions, with a 30 m resolution in the
visible, near- and middle-infrared bands and 120 m resolution in the thermal infrared. Apart from
the improved spatial and spectral resolution, TM provides a factor-of-2 improvement over MSS in
radiometric sensitivity.
The Very High Resolution Radiometer (VHRR) on board INSAT is also an opto-mechanical
scanner. In this case, since the satellite is geostationary and 3-axis stabilized, a 2-axis scan mirror is
used to compensate for the lack of relative motion between the platform and the scene.
2. Platforms:
Broadly, a platform can be defined as the vehicle that carries the sensor: a stage on which to
mount a sensor or camera to acquire information about the earth's surface. Platforms are classified
by their altitude above the earth's surface. Three different types of platforms are used to collect
data from the earth's surface and transmit it to an earth receiving station for
further analysis and interpretation.
Types of platforms:
1. Ground Observation Platforms
2. Air Borne Observation Platforms
3. Space Borne Observation Platforms
1. Towers
2. Cherry pickers
3. Portable masts
4. Hand-held platforms
To collect ground truth for laboratory and field experiments, portable hand-held photographic
cameras and spectroradiometers are used.
To work at heights of about 15 m above the ground, cherry pickers with their automatic
recording sensors can be used.
Towers can be raised to place the sensors at a greater height for observation. Towers can be
dismantled and moved from place to place.
For testing or collecting reflectance data from field sites, portable masts mounted on vehicles
are used. These masts are used to support cameras and other sensors.
e.g. Automated data collection platform instrumented to provide data on stream flow
characteristics.
Drone:
A drone is a pilotless vehicle, more like a miniature aircraft, which is remotely controlled from the
ground station. It has a climb rate of 4 m/s, an operating altitude of about 0.5 km and a forward
speed of about 100 km/hr, and it can also exhibit hovering flight. This sensor platform has a
central body in the shape of a circular tube carrying the engine, propelling fan, fuel tank and
the sensor system. The tail of the drone has small wing structures and a tail plane with control
mechanisms. The servo-motor systems operating the aerodynamic controls receive signals related
to the altitude and position of the aerial vehicle from sensors within the drone and from the
ground.
The function of the drone sensors is to maintain the altitude (of the drone) demanded by the
ground control or by a self-contained navigation system. The drone's payload includes equipment for
photography, infrared detection, radar observation and TV surveillance. The unique advantage of
such a device is that it can be accurately located above the area for which data
is required. It is an all-weather platform capable of both night and day observation.
These are essentially satellite platforms. Since there are no atmospheric hindrances in space, the
orbits of space platforms can be precisely defined and maintained. The entire earth, or a part of it,
can be covered at specific intervals.
The mode can be geo stationary – permitting continuous sensing of a portion of the earth or sun
synchronous with a low altitude polar orbit covering the entire earth at the same equator crossing
time. Space borne platforms can also be used to view extraterrestrial bodies without interference
from the earth’s atmosphere.
Synoptic coverage of the earth on a periodic basis with low maintenance expenses is very useful
for natural resource management.
Although the initial investment cost is high, spacecraft remote sensing is cheaper than
aircraft remote sensing on account of its global, repetitive coverage. Since the altitude of an
orbiting or geostationary satellite is very high, however, the spatial resolution is poorer.
Space borne platforms can be classified into following:
1. Low altitude satellites
2. High altitude geostationary satellites
3. Space shuttles
Satellites launched to an altitude of 36,000 km, with the angular velocity of the satellite equal to
that of the earth, are called geostationary satellites. These satellites are stationary over a certain
area and continuously watch the entire hemispherical disc. The coverage is about 1/3 of the earth,
so only 3 satellites are needed to cover the entire earth.
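The 36,000 km figure follows from Kepler's third law: requiring the orbital period to equal one sidereal day fixes the orbital radius. A short check using standard constants (the numbers below are not from the text):

```python
import math

# Kepler's third law gives the radius of a one-sidereal-day orbit.
GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
T = 86_164.1         # sidereal day, s
R_earth = 6_378e3    # equatorial radius of the earth, m

a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)  # orbital radius, m
altitude_km = (a - R_earth) / 1e3
print(f"geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

The result, about 35,786 km above the equator, is the value commonly rounded to 36,000 km in texts such as this one.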
These satellites are mainly used for communication purposes, meteorological applications and for
earth resource management.
Usually the satellites can be classified into two categories:
1. Manned satellite platforms – These are used for rigorous testing of the remote sensors
on board so that they can be finally incorporated in the unmanned satellites.
2. Unmanned satellite platforms – These satellites are space observatories which provide
a suitable environment in which the payload can operate, the power to perform, the means
of communicating the sensor-acquired data and spacecraft status to the ground stations,
and a capability of receiving and acting upon commands related to spacecraft control
and operation. The satellite mainframe subsystems include the structural subsystem,
orbit control subsystem, attitude measurement subsystem, power subsystem, thermal
control subsystem, telemetry, and the storage and telecommand subsystem.
A satellite is an object that orbits around another object in space. There are two types of satellites:
Natural and Man-made (Artificial).
Artificial satellites are man-made robots that are purposely placed into orbit around Earth by
rocket launchers. These satellites perform numerous tasks in communication industry, military
intelligence and scientific studies on both Earth and space.
There are many characteristics that describe any given satellite remote sensing system and
determine whether or not it will be suitable for a particular application. Among the most
fundamental of these characteristics is the satellite’s orbit. Satellites can operate in several types
of Earth orbit. The most common orbits for environmental satellites are
• Geostationary satellite orbit: a geostationary satellite completes one orbit around
the earth in the same amount of time needed for the earth to rotate once about its axis, and
so remains in a constant relative position over the equator.
• Polar orbit: An orbit with an inclination close to 90° is referred to as near polar because
the satellite will pass near the north and south poles on each orbit.
A satellite in orbit about a planet moves in an elliptical path with the planet at one
focus of the ellipse. As well as providing a synoptic view of regional relationships, the satellite
platform can be put into orbit in such a fashion that it will provide repeated coverage of the whole
of the Earth’s surface. Important elements of the orbit include its altitude, period, inclination and
equatorial crossing time.
Orbital Altitude:
Most earth observation satellites have altitudes of more than 400 km above the earth's surface, while
some operate at approximately 36,000 km altitude. The first group are mostly 'polar or
near-polar orbiting satellites' (low-level satellites) occupying so-called 'sun-synchronous orbits';
the second group are 'geostationary satellites' (high-level satellites).
Orbit Inclination:
The inclination of a satellite’s orbit refers to the angle at which it crosses the equator. An orbit
with an inclination close to 90° is referred to as near polar because the satellite will pass near the
north and south poles on each orbit. An ‘equatorial orbit’, in which the spacecraft’s ground track
follows the line of the equator, has an inclination of 0°. Two special cases are sun-synchronous
orbits and geostationary orbits.
A sun-synchronous orbit results from a combination of orbital period and inclination such that the
satellite keeps pace with the sun’s westward progress as the earth rotates. Thus, the satellite
always crosses the equator at precisely the same local sun time. A geostationary orbit is an
equatorial orbit that will produce an orbital period of exactly 24 hrs. A geostationary satellite,
thus, completes one orbit around the earth in the same amount of time needed for the earth to
rotate once about its axis and remains in a constant relative position over the equator.
Orbit configuration:
Geo-stationary orbit / geo-synchronous orbit:
Fig 2: The geostationary satellite orbits at the same rate as the earth, so it remains above a fixed
spot on the equator and monitors one area constantly
(Source: physics.uwstout.edu/wx/wxsat/measure.htm)
Fig 3: The polar orbiting satellite scans from north to south, and on each successive orbit the
satellite scans a strip further to the west
(Source: physics.uwstout.edu/wx/wxsat/measure.htm)
There are limitations on the sensor sizes and apertures that can be placed in space, due to the size and
weight of the payload on the satellite. These problems can be partially overcome using a circular
orbital configuration with a high inclination. This is known as a polar orbit, as it is based on
overflying the poles, typically 14 times a day. Precise details of the final orbit configuration depend
upon another factor, the nodal crossing time. This is the point at which an Earth observation
satellite crosses the equator, either heading towards the North or the South Pole. Preference for either
of these is determined by the particular requirements of the users for viewing the target at
different sun angles throughout the year. Additionally, the rotation of the earth underneath the
satellite, combined with natural small variations in the orbit, causes a different part of the earth's
surface to be viewed on each orbit of the satellite. The orbit can be adjusted to ensure that it
exactly repeats a pass over the same location, to study the temporal variations in, e.g., a land
feature.
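The roughly 14 passes per day quoted for polar orbiters can be checked from the period of a circular orbit; a 700 km altitude is assumed here purely for illustration.

```python
import math

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_earth = 6_371e3    # mean radius of the earth, m

def orbits_per_day(altitude_m: float) -> float:
    """Number of orbits completed in 24 h for a circular orbit."""
    a = R_earth + altitude_m
    period = 2 * math.pi * math.sqrt(a**3 / GM)  # orbital period, s
    return 86_400 / period

# A sun-synchronous satellite near 700 km overflies the poles ~14x/day:
print(f"{orbits_per_day(700e3):.1f} orbits per day")  # ~14.6
```

The period works out to roughly 99 minutes, so a near-polar satellite at this altitude completes between 14 and 15 orbits every day, consistent with the figure in the text.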
The Terra/Aqua satellites are polar orbiting satellites.
True polar orbits are preferred for missions whose aim is to view longitudinal zones under the full
range of illumination conditions.
Oblique orbiting (near polar orbit) satellites are the ones whose orbital planes cross the plane
of the equator at an angle other than 90°. Oblique orbiting satellites may be launched east wards
into direct or prograde orbits or westwards into retrograde orbits. Because the earth is not a
perfect sphere it exercises a gyroscopic influence on satellites in oblique planes such that those in
prograde orbits regress while retrograde orbits advance or precess with respect to the planes of
their initial orbits.
Fig 4: Near polar orbits: Prograde Fig 5: Near polar orbits: Retrograde
The orbital paths traced out by the satellite determine the revisit rate that can be achieved
for a particular ground station. The rate at which the satellite retraces a specific path determines
how frequently measurements can be taken where multi-temporal studies are required. These
factors govern the rates at which a satellite will generate information for a data centre.
Over time the daguerreotype process improved but was eventually replaced by newer and better
processes. In the United States, daguerreotype photographs were popularly called "tintypes." By
1851, Frederick Scott Archer of England had developed the process of coating glass plates with sensitized silver
compounds. The plates were referred to as "wet plates", and the process reduced the exposure
time to one-tenth that of the daguerreotype process.
NADAR CARICATURED IN 1862
Once a technique was established for taking pictures, an adequate aerial platform was needed for
taking aerial photographs. The only platforms available at the time were balloons and kites. In
1858, Gaspard Felix Tournachon (later known as "Nadar") captured the first recorded aerial
photograph from a balloon tethered over the Bievre Valley. However, the results of his initial
work were apparently destroyed. However, his early efforts were preserved in a caricature
prepared by Honoré Daumier for the May 25, 1862 issue of Le Boulevard. Nadar continued his
various endeavors to improve and promote aerial photography. In 1859, he contacted the French
Military with respect to taking "military photos" for the French Army's campaign in Italy and
preparing maps from aerial photographs. In 1868 he ascended several hundred feet in a tethered
balloon to take oblique photographs of Paris.
On October 13, 1860, James Wallace Black, accompanied by Professor Sam King, ascended to an
altitude of 1200 feet in King's balloon and photographed portions of the city of Boston. A cable
held the balloon in place. Black, the photographer, made eight exposures of which only one
resulted in a reasonable picture. This is the oldest conserved aerial photograph. He worked under
difficult conditions with the balloon, which although tethered, was constantly moving. Combined
with the slow speed of the photographic materials being used it was hard to get a good exposure
without movement occurring. He also used wet plates and had to prepare them in the balloon
before each exposure. After descending to take on more supplies, King and Black went up again
with the idea of not only covering Boston but also recording the surrounding countryside.
However, they encountered other problems. As they rose, the hydrogen expanded causing the
neck of the balloon to open more. This resulted in the gas flowing down on their equipment and
turning the plates black and useless. In addition, the balloon took off and they landed in some
high bushes in Marshfield, Massachusetts, about thirty miles away from their beginning point. It
was obvious that the balloon possessed problems in being an aerial platform.
M. Arthur Batut took the first aerial photograph using a kite, over Labruguière,
France, in the late 1880s. The camera, attached directly to the kite, had an altimeter that encoded
the exposure altitude on the film, allowing scaling of the image. A slow-burning fuse, responding
to a rubber band-driven device, actuated the shutter within a few minutes after the kite was
launched. A small flag dropped once the shutter was released to indicate that it was time to bring
down the kite. Batut took his first aerial photograph in May 1888. However, due to the shutter
speed being too slow, the image was not very clear. After some modification to the thickness of
the rubber band a good shutter speed was obtained.
In 1906, George R. Lawrence took oblique aerial pictures of San Francisco after the earthquake
and fires.
Using between nine and seventeen large kites to lift a huge camera (49 pounds) he took some of
the largest exposures (about 48 x 122 cm or 18 x 48 in.) ever obtained from an aerial platform.
His camera was designed so that the film plate curved in back and the lens fitted low on the front,
providing panorama images. The camera was lifted to a height of approximately 2,000 feet and an
electric wire controlled the shutter to produce a negative. Lawrence designed his own large-
format cameras and specialized in aerial views.
He used ladders or high towers to photograph from above. In 1901 he shot aerial photographs
from a cage attached to a balloon. One time, at more than 200 feet above Chicago, the cage tore
from the balloon, and Lawrence and his camera fell to the ground. Fortunately telephone and
telegraph wires broke his fall; he landed unharmed. He continued to use balloons until he
developed his method for taking aerial views with cameras suspended from unmanned kites, a
safer platform from his perspective. He developed a means of flying Conyne kites in trains and
keeping the camera steady under varying wind conditions. This system he named the 'Captive
Airship'.
In order for the pigeons to carry such small cameras and take several pictures in one flight, a new
type of film and a smaller camera system were needed. In the 1870s, George Eastman, born in the
rural community of Waterville in upstate New York, was an accountant in Rochester. After
working five years in a bank, he became bored with the monotony of the job. In 1878, he decided
to take a vacation to the island of Santo Domingo and re-evaluate his life. To record his trip he
acquired a wet-plate camera outfit. However, he found the camera and assorted darkroom
equipment to be cumbersome and bulky. He would need a small wagon to carry all of the
materials and equipment, an arrangement not suited for taking pictures on one's vacation. He soon
forgot about the trip to Santo Domingo and became intrigued with the idea of developing a better
film and camera system.
In 1879, Eastman discovered the formula for making a successful gelatin-emulsion-coated dry
plate and built a machine for coating dry plates with the emulsion. These developments led to the
invention of rolled paper film. The resulting prints were sharp, clear and free from paper grain
distortion. In 1889, his company, Kodak, introduced flexible celluloid film and the popularity of
photography soared. He now needed a camera to take advantage of the new film. In 1900,
outfitted with a simple lens and the ability to handle rolled film, the one-dollar Kodak box
camera, called the Brownie, made Kodak and photography almost synonymous. Eastman had not
only revolutionized the field of photography but set the stage for new developments in the field of
aerial photography. His work was shortly followed in 1903 by the Wright Brothers' first
successful flight of a heavier-than-air aircraft. Another type of aerial platform was available.
At the beginning of World War I the military on both sides of the conflict saw the value of using
the airplane for reconnaissance work but did not fully appreciate the potential of aerial
photography. Initially, aerial observers, flying in two-seater airplanes with pilots, did aerial
reconnaissance by making sketch maps and verbally conveying conditions on the ground. They
reported on enemy positions, supplies, and movements; however, some observers tended to
exaggerate or misinterpret conditions. In some cases, their observations were based on looking at
the wrong army. From above, identifying one soldier from another was not easy. One time a
German observer indicated that an English unit was running around in great disarray and appeared
to be in a state of panic. The English were playing soccer.
Some English observers started using cameras to record enemy positions and found aerial
photography easier and more accurate than sketching and observing. The aerial observer became
the aerial photographer. Soon all of the nations involved in the conflict were using aerial
photography. The maps used by both sides in the Battle of Neuve-Chappelle in 1915 were
produced from aerial photographs. By the end of the war the Germans and the British were
recording the entire front at least twice a day. Both countries possessed up-to-date records of their
enemy's trench construction.
England estimated that its reconnaissance planes took one-half million photographs during the
war, and Germany calculated that if all of its aerial photographs were arranged side by side, they
would cover the country six times. The war brought major improvements in the quality of
cameras; photographs taken at 15,000 feet (4,572 m) could be blown up to show footprints in
the mud.
By World War I, the airplane had matured enough in its development to be used for aerial reconnaissance.
However, aerial photographs taken from planes were often highly distorted due to shutter speeds
being too slow in relationship to the speed of the plane. Toward the end of the war Sherman M.
Fairchild developed a camera with the shutter located inside the lens. This design significantly
reduced the distortion problem. In addition, the camera’s magazine would prevent uneven
spacing. Fairchild also designed an intervalometer that allowed photos to be taken at any interval.
Combined these developments made the Fairchild camera the best aerial camera system available.
With modifications, the Fairchild camera remained the desired aerial camera system for the next
fifty years.
In 1921, Fairchild took a series of 100 overlapping photographs and made an aerial map of Manhattan
Island. This aerial map was his first real commercial success, and it was used by several New York City
agencies and businesses. In 1922, Newark, New Jersey contracted with him to map its bay area.
A Connecticut town discovered 1800 buildings not on its tax rolls using an aerial map, and
another town, East Haven, wanted to reassess its properties but discovered that to conduct a
ground survey would take five years and cost $80,000. The Canadian company, Laurentide Paper
and Pulp, hired him to survey the large, inaccessible forest regions of Canada. Within the first
year, 510 square miles were mapped. Fairchild was demonstrating that aerial photography had
many non-military uses and could be a successful venture commercially. By the mid-1930’s,
Fairchild Aerial Surveys was the largest and most commercially successful aerial photography
company in the United States.
Fairchild found it necessary to enter the field of manufacturing airplanes in order to have a good
solid aerial platform. The open-cockpit biplanes were totally unsatisfactory. He produced high-
wing cabin monoplanes. An enclosed, heated cabin protected the camera equipment as well as the
photographer and pilot from the weather elements. He now had three companies, one to produce
aerial cameras, another to conduct aerial surveys, and a final one to build planes suited to
undertake aerial photography. Fairchild’s brilliant camera designs and his strong commitment to
aerial photography brought aerial mapping to full maturity. Before his death in 1971, he saw his
cameras carried on Apollo 15, 16, and 17, and while astronauts explored the lunar surface, his
cameras mapped the moon.
In 1926, another platform was introduced for obtaining pictures of the Earth’s surface. In that
year Dr. Robert H. Goddard constructed and tested successfully the first rocket using liquid fuel.
The rocket was launched on March 16, 1926, at Auburn, Massachusetts. His second rocket was
also launched at Auburn in 1929 and it carried a scientific payload (a barometer and a camera).
The first picture from a rocket was taken during this launch.
Due to his lifetime of major accomplishments in the field of space technology, Goddard who died
in 1945 was honored in 1959 by receiving, posthumously, the Congressional Gold Medal. Also in
1959 in memory of his outstanding work, a major space science laboratory, NASA's Goddard
Space Flight Center, Greenbelt, Maryland, was established. Finally, in 1959, Explorer VI, under
Goddard project management, provided the world with its first image of Earth from space. In
1960, the term "remote sensing" was coined.
World War II brought about tremendous growth and recognition to the field of aerial photography
that continues to this day. In 1938, the chief of the German General Staff, General Werner von
Fritsch, stated, “The nation with the best photoreconnaissance will win the war.” By 1940,
Germany led the world in photoreconnaissance. However, after von Fritsch’s death the quality of
German photointelligence declined. When the United States entered the War in 1941, it basically
had no experience in military photointerpretation. By the end of the War, it had the best
photointerpretation capacity of any nation in the world. In 1945, Admiral J. F. Turner,
Commander of American Amphibious Forces in the Pacific, stated that, “Photographic
reconnaissance has been our main source of intelligence in the Pacific. Its importance cannot be
overemphasized.”
1950’S
During the 1950’s, aerial photography continued to evolve from work started during World War
II and the Korean War. Color-infrared became important in identifying different vegetation types
and detecting diseased and damaged vegetation. Multispectral imagery, that is images taken at the
same time but in different portions of the electromagnetic spectrum, was being tested for different
applications. Radar technology moved along two parallel paths, side-looking airborne radar
(SLAR) and synthetic aperture radar (SAR). Westinghouse and Texas Instruments did most of
this work for the United States Air Force.
1957
Russia launched Sputnik 1, the first artificial satellite, marking the beginning of the era that would bring satellite imagery
1970s
The first of the Landsat satellites was launched by NASA in 1972. During the 1970s and 1980s,
the Landsat program began selling satellite imagery commercially for the first time.
Aerial photography is the art of taking photographs of features or phenomena on the earth's
surface from airborne platforms with the help of a camera, without coming into contact with the
objects themselves. Aerial photography, as most commonly used by military personnel, may be
divided into two major types, the vertical and the oblique. Each type depends upon the attitude of
the camera with respect to the earth's surface when the photograph is taken.
Aerial photographs have the advantage of providing us with synoptic views of large areas. This
characteristic also allows us to examine and interpret objects simultaneously on large areas and
determine their spatial relationships, which is not possible from the ground. Aerial photographs
are also cost effective in interpreting and managing natural resources. They have played a
significant role in map making and data analysis.
Classification of photographs:-
A number of systems have been used to classify photographs. The most common system is the
one that separates photographs into terrestrial and aerial (Figure 1).
A vertical photograph is taken with the camera pointed as straight down as possible (Figure 2).
The allowable tolerance is usually ±3° between the plumb line and the camera axis. A vertical
photograph has the following characteristics:
An oblique photograph is taken when the axis of the camera is tilted with respect to the vertical,
so that it makes an angle with the feature being photographed. Depending on the angle, oblique
photographs can be divided into the following groups:
Low Oblique. This is a photograph taken with the camera inclined about 30° from the vertical
(Figure 3). It is used to study an area before an attack, to substitute for a reconnaissance, to
substitute for a map, or to supplement a map. A low oblique has the following characteristics:
High Oblique. The high oblique is a photograph taken with the camera inclined about
60° from the vertical (Figure 4). It has limited military application; it is used primarily
in the making of aeronautical charts. However, it may be the only photography available.
A high oblique has the following characteristics:
1. Vertical photographs present an approximately uniform scale throughout the photo,
unlike oblique photos. It follows that making measurements (e.g., distances and
directions) on vertical photographs is easier and more accurate.
2. Because of a constant scale throughout a vertical photograph, the determination of
directions (i.e., bearing or azimuth) can be performed in the same manner as a map. This
is not true for an oblique photo because of the distortions.
3. Because of a constant scale, vertical photographs are easier to interpret than oblique
photographs.
4. Vertical photographs are simple to use photogrammetrically as a minimum of
mathematical correction is required.
5. To some extent and under certain conditions (e.g., flat terrain), a vertical aerial
photograph may be used as a map if a coordinate grid system and legend information are
added.
6. Stereoscopic study is also more effective on vertical than on oblique photographs.
1. An oblique photograph covers much more ground area than a vertical photo taken from
the same altitude and with the same focal length.
2. If an area is frequently covered by a cloud layer that is too low for vertical photography,
there may still be enough clearance for oblique coverage.
3. Oblique photos have a more natural view because we are accustomed to seeing the
ground features obliquely. For example, tall objects such as bridges, buildings, towers,
trees, etc. will be more recognizable because the silhouettes of these objects are visible.
4. Objects that are under trees or under other tall objects may not be visible on vertical
photos if they are viewed from above. Also some objects, such as ridges, cliffs, caves,
etc., may not show on a vertical photograph if they are directly beneath the camera.
5. Determination of feature elevations is more accurate using oblique photographs than
vertical aerial photographs.
6. Because oblique aerial photos are not used for photogrammetric and precision purposes,
they may be taken with inexpensive cameras.
Depending on the platform on which the camera is mounted, aerial photography can be divided
into:-
• Balloon Aerial Photography: In this case the camera is mounted on a balloon. This is
the earliest form of aerial photography, first used in 1858.
• Kite Aerial Photography: In this case the camera is mounted on a kite.
• Mast Aerial Photography: The mast is used as the main object on which the camera is
mounted. It is fixed on a vehicle which takes the mast to the desired places on the
instruction of the photographer.
Depending on the type of camera used Aerial Photography can be divided into:-
• Multiple Lens Photography:- These are composite photographs taken with one camera
having two or more lenses, or by two or more cameras. The photographs are
combinations of two, four, or eight obliques around a vertical. The obliques are rectified
to permit assembly as verticals on a common plane.
• Convergent Photography:- These are done with a single twin-lens, wide-angle camera,
or with two single-lens, wide-angle cameras coupled rigidly in the same mount so that
each camera axis converges when intentionally tilted a prescribed amount (usually 15 or
20°) from the vertical.
• Panoramic:-The development and increasing use of panoramic photography in aerial
reconnaissance has resulted from the need to cover in greater detail more and more areas
of the world. A panoramic camera is a scanning type of camera that sweeps the terrain of
interest from side to side across the direction of flight. This permits the panoramic camera
to record a much wider area of ground than either frame or strip cameras.
Three terms need defining here, they are Principal Point, Nadir and Isocenter. They are defined
as follows:
1. Principal Point - The principal point is the point where the perpendicular projected through
the center of the lens intersects the photo image.
2. Nadir - The Nadir is the point vertically beneath the camera center at the time of exposure.
3. Isocenter - The point on the photo that falls on a line halfway between the principal point
and the Nadir point.
These points are important because certain types of displacement and distortion radiate from
them. Tilt displacement radiates from the isocenter of the aerial photo, while topographic
displacement radiates from the nadir.
Aerial photographs are created using a central or perspective projection. Therefore, the relative
position and geometry of the objects depicted depend upon the location from which the photo
was taken. Because of this, certain forms of distortion and displacement occur in air
photos.
Distortion - Shift in the location of an object that changes the perspective characteristics of the
photo.
Displacement - A shift in the location of an object in a photo that does not change the perspective
characteristics of the photo (the distance between an object's image and its true plan
position, caused by changes in elevation).
Both distortion and displacement cause changes in the apparent location of objects in photos. The
distinction between the types of effects caused lies in the nature of the changes in the photos.
These types of phenomena are most evident in terrain with high local relief or significant vertical
features.
Three main types of problems/effects caused by specific types of distortion and displacement are:
• Lens distortion - Small effects due to the flaws in the optical components (i.e. lens) of
camera systems leading to distortions (which are typically more serious at the edges of
photos).
• Tilt Displacement - This type of displacement typically occurs along the axis of the wings
or the flight line. Tilt displacement radiates from the isocenter of the photo and causes
objects to be displaced radially towards the isocenter on the upper side of the tilted photo
and radially outward on the lower side.
• Topographic Displacement - This is typically the most serious type of displacement. This
displacement radiates outward from Nadir. Topographic displacement is caused by the
perspective geometry of the camera and the terrain at varying elevations.
To ensure complete coverage of an area, successive photographs along a flight line repeat part of
the ground covered by the previous exposure. This repetition creates overlapping photographs,
with a forward overlap of 60%-70%. When adjacent flight lines photograph neighbouring strips
of the same area, the photographs also overlap laterally; this sidelap is 30%-40%.
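These overlap figures translate directly into flight planning numbers. The sketch below (the coverage value is invented for illustration, and the function names are ours) computes the distance between successive exposures and between adjacent flight lines from a photograph's ground coverage:

```python
def air_base(ground_coverage_m: float, forward_overlap: float) -> float:
    """Distance flown between successive exposures for a given forward overlap."""
    return ground_coverage_m * (1.0 - forward_overlap)

def line_spacing(ground_coverage_m: float, sidelap: float) -> float:
    """Distance between adjacent flight lines for a given sidelap."""
    return ground_coverage_m * (1.0 - sidelap)

# A photograph covering 2300 m on a side, with 60% forward overlap
# and 30% sidelap (the overlap percentages typical of the text):
print(f"{air_base(2300, 0.60):.0f} m between exposures")      # 920 m
print(f"{line_spacing(2300, 0.30):.0f} m between flight lines")  # 1610 m
```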
Scale is one of the most important pieces of information for the use of an aerial photograph or a
map. Quantitative measurements and interpretation of features on a photograph are highly
dependent upon it. Scale determines the relationship between the objects imaged on a
photograph and their counterparts in the real world (i.e., on the ground). The scale of a
photograph is defined as the ratio of the distance measured between any two points on the
photograph (or a map) to the distance between the same two points on the ground.
Representative fraction (or ratio) is the fraction of a distance measured between two points on a
photograph to the distance measured between the same two points on the ground. It can be
expressed as 1/20000 or as 1:20000.
Unit equivalents, also called equivalent scale, express the equivalence of a distance measured
between two points in photographic units to the distance between the same two points in ground
units. For example, an RF of 1:20000 would be expressed as 1 mm = 20 m (or 1 cm = 200 m, or 1
inch ≈ 1667 ft), meaning that a distance of 1 mm on a photograph is equivalent to 20 m on the
ground (or 1 cm is equivalent to 200 m on the ground).
Photo scale reciprocal (PSR) is simply the inverse of the representative fraction. For example, an
RF of 1:20000 would correspond to a PSR of 20000.
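The three representations are interchangeable; the minimal sketch below illustrates the conversions (the function names are ours, the conversion factors are standard):

```python
def psr_from_rf(rf_denominator: int) -> int:
    """The photo scale reciprocal is just the denominator of the RF."""
    return rf_denominator

def mm_on_photo_to_m_on_ground(psr: float) -> float:
    """Unit equivalent: ground metres represented by 1 mm on the photo."""
    return psr / 1000.0          # 1 mm x PSR, converted from mm to m

def inch_on_photo_to_ft_on_ground(psr: float) -> float:
    """Unit equivalent: ground feet represented by 1 inch on the photo."""
    return psr / 12.0            # 1 in x PSR, converted from inches to feet

scale_recip = psr_from_rf(20000)                          # an RF of 1:20000
print(mm_on_photo_to_m_on_ground(scale_recip))            # 20.0 -> 1 mm = 20 m
print(round(inch_on_photo_to_ft_on_ground(scale_recip)))  # 1667 -> 1 inch ~ 1667 ft
```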
Types of Scale:-
Point scale: It is the scale at a point with a specific elevation on the ground. This suggests that
every point on a vertical photograph at a different elevation will have a different scale.

PSpoint = f / (H - h)

where:
f is the focal length of the camera,
H is the flying height of the aircraft above the datum, and
h is the elevation of the point above the datum.
Average Scale: Unlike point scale, which is specific to a single point on the ground, average scale
may be determined for the entire project area, a set of photographs, a single photograph, a portion
of a photograph, or between two points on a photograph.

PSav = f / (H - hav) = f / Hav

where:
PSav is the average scale of the area considered (project, set of photographs, etc.),
hav is the average elevation of the area, and
H-hav = Hav is the flying height of the aircraft above the average elevation of the area.
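With the focal length and heights in hand, both scales reduce to one-line computations. A sketch with hypothetical values (a 152 mm focal length and a 3200 m flying height are assumptions for illustration):

```python
def point_scale_reciprocal(f: float, H: float, h: float) -> float:
    """Denominator of the point scale RF: (H - h) / f, all in the same units."""
    return (H - h) / f

def average_scale_reciprocal(f: float, H: float, h_av: float) -> float:
    """Denominator of the average scale RF, using the average elevation h_av."""
    return (H - h_av) / f

# Camera focal length 0.152 m, flying height 3200 m above the datum:
print(round(point_scale_reciprocal(0.152, 3200.0, 160.0)))    # 20000 -> 1:20000
print(round(average_scale_reciprocal(0.152, 3200.0, 450.0)))  # scale at mean terrain height
```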
The scale of an aerial photograph is affected by the following factors:
• Focal length
• Topography
• Tilt
• Flying height
11 AERIAL PHOTOGRAMMETRY
Aerial photogrammetry: Image parallax, Parallax measurement and Relief displacement:-
Image Parallax: The term parallax refers to the apparent change in the relative positions of
stationary objects caused by a change in viewing position. Simply put, it is the shift of an object
against a background caused by a change in observer position. If there is no parallax between two
objects, then they lie at the same distance from the observer.
This parallax is often thought of as the 'apparent motion' of an object against a distant background
because of a perspective shift, as seen in Figure 1. When viewed from Viewpoint A, the object
appears to be closer to the blue square. When the viewpoint is changed to Viewpoint B, the object
appears to have moved in front of the red square.
Figure 2 shows images made up of points with various parallax values. The lines of sight of
the eyes correspond to the optical axes of their lenses, and their distance apart is called the
interpupillary separation.
In FIG. 2A the left and right image points are shown to correspond. By definition, this
condition is known as “zero parallax,” and such a point will appear at the plane of the screen. As
shown, the eyes are inwardly converged to fuse the superimposed corresponding (homologous)
left and right points.
With regard to FIG. 2B, note that the homologous points are separated by a distance given
by the arrowed line whose length is the same as the interpupillary separation. In other words, the
parallax value of these points is equal to the interpupillary separation. In such a case, the lines of
sight of the left and right eyes are parallel.
FIG. 2D is similar to FIG. 2B, except that the homologous points are separated by a
distance that is greater than the interpupillary separation. The lines of sight diverge, and this case
is known as divergence.
Parallax Measurement:-
Distance is inversely related to parallax: the smaller the parallax p of an object, the greater its
distance d, with d approximately proportional to 1/p. The approximation is far more accurate for
relatively small values of the parallax error when compared to the parallax itself.
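In aerial photogrammetry this principle is usually applied through the differential parallax between the top and base of an object. The sketch below uses the standard parallax height equation h = H · ΔP / (Pa + ΔP); the flying height and parallax values are hypothetical:

```python
def height_from_parallax(H_m: float, base_parallax_mm: float, dp_mm: float) -> float:
    """Object height from differential parallax: h = H * dP / (Pa + dP).

    H_m is the flying height above the object's base, base_parallax_mm (Pa) is
    the absolute parallax of the base, and dp_mm is the measured differential
    parallax between the object's top and base.
    """
    return H_m * dp_mm / (base_parallax_mm + dp_mm)

# Flying height 3000 m, photo base parallax 90 mm,
# measured differential parallax 2.06 mm:
print(round(height_from_parallax(3000.0, 90.0, 2.06), 1))  # about 67.1 m
```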
Relief Displacement:-
Because an aerial photograph is a central projection, all elevation and depression will
have their images displaced from their original position on the ground except the objects at the
nadir point (n) or principal point (pp) in vertical aerial photographs. Relief displacement is the
difference between the position a point would occupy on the photograph if it lay on the reference
plane and its actual position due to relief.

The relief displacement Δr is proportional to the radial distance r from the nadir point, in the ratio
of the height difference Δz to the flying height Zm:

Δr = r · Δz / Zm

In a tilted photograph, relief displacement is radial from the nadir point.
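The relation Δr = r · Δz / Zm can be evaluated directly, or inverted to estimate object heights from their displaced images; a sketch with hypothetical values:

```python
def relief_displacement(r_mm: float, dz_m: float, Zm_m: float) -> float:
    """Relief displacement on the photo: dr = r * dz / Zm."""
    return r_mm * dz_m / Zm_m

def height_from_displacement(dr_mm: float, r_mm: float, Zm_m: float) -> float:
    """Invert the relation to estimate an object's height from its displacement."""
    return dr_mm * Zm_m / r_mm

# A tower imaged 80 mm from the nadir point, flying height 1220 m,
# displacement of 2.0 mm between the tower's base and top:
print(height_from_displacement(2.0, 80.0, 1220.0))  # 30.5 m
```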
Relief (Topographic)
Displacement
The general tasks of aerial photo interpretation are viewing photographs, making
measurements on photographs, and transferring interpreted information to base maps or digital
databases. Aerial photo interpretation involves stereoscopic viewing to provide a three-
dimensional view of the terrain. This is possible because of the binocular vision of the human
eyes.
Sophisticated automatic instruments such as radial line plotters and the A-7 and A-8 can be used
for the interpretation and preparation of maps, but this equipment is not within the reach of
individuals or general laboratories, and the techniques involved are difficult. The most generally
used equipment therefore consists of stereoscopes and sketch masters, since they are less
expensive and simple.
Stereoscopes facilitate the stereo viewing process. People with weak eyesight in one eye may not
be able to see in stereo, although even people with monocular vision can become proficient photo
interpreters. Several types of stereoscopes are available, such as lens stereoscopes, mirror
stereoscopes and zoom stereoscopes.
Lens stereoscope
A lens stereoscope consists of two lenses mounted in the same plane, generally attached to a
rectangular metallic frame. It is portable and comparatively inexpensive. Its design assumes that
the distance between a person's two eyes is approximately 65 mm. With the help of two legs, it
can be placed on the table so that the lenses sit about 100 mm above the plane of the table. The
photographs are magnified about two and a half times.
Below the instrument, the two photographs of a stereopair are placed, and the distance between
them is adjusted so that both images of the same point fuse into one. An imaginary three-
dimensional model of the visible landscape then appears, allowing us to visualize the terrain of
that part of the land on a small scale.
The figures given below can be used to test stereoscopic vision. When this diagram is viewed
through a stereoscope, the rings and other objects should appear to be at varying distances from
the observer.
The principal disadvantage of small lens stereoscopes is that the photographs must be quite close
together. As a result, the interpreter cannot view the entire stereoscopic area of 240 mm aerial
photographs without raising the edge of one of the photographs.
Mirror stereoscopes use a combination of prisms and mirrors to separate the lines of sight from
each of the viewer's eyes, and they employ little or no magnification. The interpreter can
therefore view all or most of the stereoscopic portion of a 240 mm stereopair without moving
either the photographs or the stereoscope. The principal disadvantage of the mirror stereoscope is
its large size: it is not portable, and it is more costly than a simple lens stereoscope.
Mirror stereoscopes
Scanning mirror stereoscopes are an improved form of mirror stereoscope. They have two
binoculars attached and can be used with 1.5 or 4.5 power magnification. A built-in provision
moves the field of view across the entire stereo overlap area without moving the photographs or
the stereoscope, and the instrument allows two persons to view the same aerial photographs
simultaneously.
Zoom stereoscopes have a continuously variable magnification of 2.5 to 10 power. They are
expensive precision instruments, typically with very high resolution. The image in each
eyepiece can be optically rotated through 360° to accommodate uncut rolls of film taken under
varying flight conditions.
Zoom stereoscope
Either paper prints or film transparencies can be viewed using a stereoscope. Paper prints are
more convenient to handle, more easily annotated, and better suited to field use. An interpreter
would generally use a simple lens or mirror stereoscope with paper prints.
A more elaborate viewer such as a zoom stereoscope can be used with colour and colour infrared
film transparencies. Transparencies are placed on a light table for viewing, since the light source
must come from behind the transparency.
The task of taking distance measurements from aerial photographs can be performed using many
measurement devices. They differ in their cost, accuracy and availability.
A parallax bar, or stereomicrometer, is used together with a mirror stereoscope. It is a metallic
micro scale constructed on a bar with a graduated scale. A graduated screw is fixed at one end;
its rotation is related to the graduated scale on the bar in a fixed ratio. A transparent glass
plate carrying a floating mark on its underside is fixed at each end. To determine the difference
in height between two points, the first step is to fuse the double three-dimensional image into
one by properly setting the instrument, so that the three-dimensional image becomes visible.
An engineer’s scale or metric scale is often adequate. In addition to measuring distances, areas
are often measured on a photograph. Accurate area measurements can be made from maps
generated from airphotos in stereo plotters or orthophotoscopes.
A polar planimeter mechanically computes areas as the interpreter traces around the boundary of
an area in a clockwise direction. Areas can be determined most rapidly and accurately using an
electronic coordinate digitizer, or with a digitizing tablet interfaced with a microcomputer.
After interpretation, the data can be transferred to a base map. When the base map and the
photograph are not at the same scale, special optical devices can be used for the transfer process:
by adjusting the magnification of the two views, the photo can be matched to the scale of the map.
The Bausch and Lomb Zoom Transfer Scope allows the operator to view a map and a pair of
stereo photographs together, and it can accommodate a wide disparity of photo and map scales.
The colour additive viewer is another piece of photo interpretation equipment. This device
colour-codes and superimposes three multispectral photographs to generate a more interpretable
colour composite. Most colour additive viewers are monoscopic; a few are equipped for
stereoscopic viewing.
The sketch master is used for mapping and for delineating landscape features on available
topographic maps. It consists of a metallic stand with a graduated scale fixed to a geometrically
shaped metallic piece. A metal piece is attached to the stand by an adjustable screw, and a
horizontally fixed arm on this metal piece can be moved forward and backward. Another metal
piece is attached on the other side; the photographs can be fixed on this, with the help of
magnetic metallic weights, and its position is adjustable. At the end of the horizontal arm another
horizontal bar is attached, carrying a double prism. The viewer can thus see the images of the air
photo and the map or sketch placed below the prisms, making it possible to construct the map
with the help of adjustment of control points.
Satellite remote sensing data in general, and digital data in particular, have been used as basic
inputs for the inventory and mapping of natural resources of the earth's surface, such as
agriculture, soils, forestry, and geology. The central idea behind digital image processing is that
the digital image is fed into a computer one pixel at a time; each pixel value is transformed, for
instance through Look-Up-Table (LUT) values, to produce a new image. Virtually all the
procedures may be grouped into one or more of the following broad types of operations:
(1) Pre-processing
(2) Image Registration
(3) Image enhancement
(4) Image filtering
(5) Image transforms
(6) Image classification
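As a minimal illustration of the LUT idea mentioned above, the sketch below builds a 256-entry table that linearly stretches a narrow range of brightness values over the full 8-bit range (the pixel values are invented):

```python
import numpy as np

# Hypothetical 8-bit band with values concentrated in a narrow range.
band = np.array([[60, 62, 65],
                 [70, 75, 80],
                 [85, 90, 95]], dtype=np.uint8)

# Build a 256-entry look-up table that linearly stretches [60, 95] to [0, 255].
lo, hi = int(band.min()), int(band.max())
lut = np.clip((np.arange(256) - lo) * 255.0 / (hi - lo), 0, 255).astype(np.uint8)

# Each pixel value is used as an index into the LUT to form the new image.
stretched = lut[band]
print(stretched.min(), stretched.max())  # 0 255
```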
Preprocessing
Pre-processing involves the initial correction of raw image data: correcting geometric distortions,
calibrating the data radiometrically, and eliminating the noise present in the data. All
pre-processing methods are considered under three heads, namely,
(1) Geometric correction methods,
(2) Radiometric correction methods,
(3) Atmospheric correction methods.
To correct sensor data, both internal and external errors must be determined and be either
predictable or measurable. Internal errors, due to sensor effects, are systematic or stationary,
i.e., constant for all practical purposes.
Cosmetic operations
Two cosmetic corrections are common. The first is the correction of digital images containing
partially or entirely missing scan lines; this phenomenon, in which a detector fails to record data
for a line, is called line drop. The second is the destriping of imagery, required because different
detectors sometimes record different irradiance values for the same object, producing striping.
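A common repair for line drop is to replace the missing line with the average of its neighbouring lines; a minimal sketch (assuming the dropped row is neither the first nor the last line, and using invented pixel values):

```python
import numpy as np

def repair_line_drop(img: np.ndarray, bad_row: int) -> np.ndarray:
    """Replace a dropped scan line with the mean of its neighbouring lines."""
    out = img.astype(float).copy()
    out[bad_row] = (out[bad_row - 1] + out[bad_row + 1]) / 2.0
    return out

img = np.array([[10., 20., 30.],
                [ 0.,  0.,  0.],   # dropped line, recorded as zeros
                [14., 24., 34.]])
print(repair_line_drop(img, 1)[1])  # [12. 22. 32.]
```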
Image registration
Image registration is the translation and rotation alignment process by which two images or
maps of like geometries are positioned coincident with respect to one another, so that
corresponding elements of the same ground area appear in the same place on the registered
images. Rectification is the process by which the geometry of an image area is made planimetric.
Whenever accurate area, direction, and distance measurements are required, geometric
rectification is required.
Image enhancement
The aim of digital enhancement is to amplify slight differences in brightness for better clarity of
the image scene; that is, digital enhancement increases the separability (contrast) between the
classes or features of interest. Digital image enhancement may be defined as a set of
mathematical operations applied to digital remote sensing input data to improve the visual
appearance of an image for better interpretability or subsequent digital analysis. The
common problems that can be removed by image enhancement are:
(1) Low sensitivity of detectors,
(2) Weak signals from objects present on the earth's surface,
(3) Similar reflection of different objects,
(4) Environmental conditions at the time of recording, and
(5) The poor ability of the human eye to discriminate slight radiometric and spectral differences.
Image filtering
A characteristic of remotely sensed images is a parameter called spatial frequency, defined
as the number of changes in brightness value per unit distance in any particular part of an image.
If the brightness values change dramatically over very short distances, the area is called a high-
frequency area. Algorithms which perform this kind of enhancement are called "filters" because
they suppress certain frequencies and pass (emphasize) others. Filters that pass high frequencies,
thereby emphasizing fine detail and edges, are called high-frequency filters, and filters that pass
low frequencies are called low-frequency filters.
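The distinction can be seen with two classic 3×3 kernels applied to a small image containing one sharp brightness edge; a sketch with invented values (only the interior pixels are filtered, for simplicity):

```python
import numpy as np

def filter3x3(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply a 3x3 filter to the interior of an image (border pixels omitted)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return out

img = np.array([[10., 10., 10., 80., 80.],
                [10., 10., 10., 80., 80.],
                [10., 10., 10., 80., 80.]])

low_pass = np.ones((3, 3)) / 9.0          # passes low frequencies (smooths)
high_pass = np.array([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]])   # emphasizes edges and fine detail

print(filter3x3(img, low_pass))   # smooth transition across the boundary
print(filter3x3(img, high_pass))  # large values only at the brightness edge
```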
Image transformation
Image transformations applied to remotely sensed data allow the generation of a new image
based on arithmetic operations, mathematical statistics, and Fourier transformations.
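A familiar example of an arithmetic transform is the normalized difference vegetation index, computed pixel-by-pixel from two bands; the band values below are invented for illustration:

```python
import numpy as np

# Hypothetical red and near-infrared bands of the same scene.
red = np.array([[50., 40.], [30., 20.]])
nir = np.array([[100., 120.], [90., 60.]])

# An arithmetic transform: the normalized difference vegetation index,
# computed element-wise to generate a new image from the two input bands.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```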
Introduction
Remotely sensed raw data, received from imaging sensors mounted on satellite platforms,
generally contain flaws and deficiencies. The correction of deficiencies and removal of flaws
present in the data are termed pre-processing methods. This correction model involves the initial
processing of raw image data to correct geometric distortions, to calibrate the data
radiometrically, and to eliminate the noise present in the data. The distortions to be corrected fall
into two groups:
• Internal distortions, resulting from the geometry of the sensor
• External distortions, resulting from the attitude of the sensor or the shape of the object
attitude, ground control points, atmospheric conditions, etc. The transformation of a remotely
sensed image into a map with proper scale and projection properties is called geometric
correction. Geometric correction of remotely sensed images is required when the image, or a
product derived from the image such as a vegetation index or a classified image, is to be used in
one of the following circumstances:
• to transform an image to match a map projection
• to locate points of interest on map and image
• to bring adjacent images into registration
• to overlay temporal sequences of images of the same area perhaps acquired by different
sensors
• to overlay images and maps within GIS
• to integrate remote sensing data with GIS.
To correct sensor data, both internal and external errors must be determined and be either
predictable or measurable. Internal errors, due to sensor effects, are systematic or stationary, i.e.,
constant for all practical purposes. External errors are due to platform perturbations and scene
characteristics, which are variable in nature and can be determined from ground control and
tracking data.
Geometric correction
The steps to follow for geometric correction are as follows:
1) Selection of method
After consideration of the characteristics of the geometric distortion as well as the available
reference data, a proper method should be selected.
2) Determination of parameters
Unknown parameters which define the mathematical equation between the image coordinate
system and the geographic coordinate system should be determined with calibration data and/or
ground control points.
3) Accuracy check
Accuracy of the geometric correction should be checked and verified. If the accuracy does not
meet the criteria, the method or the data used should be checked and corrected in order to avoid
the errors.
a. Systematic correction
When the geometric reference data or the geometry of sensor are given or measured, the
geometric distortion can be theoretically or systematically avoided. For example, the geometry of
a lens camera is given by the collinearity equation with calibrated focal length, parameters of lens
distortions, coordinates of fiducial marks etc. The tangent correction for an optical mechanical
scanner is a type of system correction. Generally systematic correction is sufficient to remove all
errors.
b. Non-systematic correction
Polynomials to transform from a geographic coordinate system to an image coordinate system, or vice versa, are determined from the given coordinates of ground control points using the least squares method. The accuracy depends on the order of the polynomials, and on the number and distribution of ground control points.
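The least-squares fitting of a polynomial to ground control points described above can be sketched for the simplest (first-order, affine) case as follows. This is a minimal illustration with NumPy; the GCP coordinates are invented for the example:

```python
import numpy as np

# Hypothetical ground control points: (column, row) in the image
# and the corresponding (easting, northing) on the map.
image_xy = np.array([[10, 12], [200, 15], [18, 190], [210, 205], [105, 98]], dtype=float)
map_xy = np.array([[500010, 699988], [500200, 699985],
                   [500018, 699810], [500210, 699795], [500105, 699902]], dtype=float)

def fit_affine(src, dst):
    """Least-squares fit of a first-order polynomial (affine) transform:
    each dst coordinate is modelled as a linear function of (x, y, 1)."""
    design = np.hstack([src, np.ones((len(src), 1))])   # n x 3 design matrix
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs                                       # 3 x 2 parameter matrix

def apply_affine(coeffs, pts):
    """Transform image coordinates into map coordinates."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

coeffs = fit_affine(image_xy, map_xy)
residuals = apply_affine(coeffs, image_xy) - map_xy
rmse = float(np.sqrt((residuals ** 2).mean()))          # accuracy check at the GCPs
```

The accuracy check of step 3 then amounts to comparing the RMSE of the residuals at the control points against the acceptance criterion (usually within one pixel).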
c. Combined method
Firstly the systematic correction is applied, then the residual errors are reduced using lower order polynomials. Usually the goal of geometric correction is to obtain an error within plus or minus one pixel of the true position.
(1) Radiometric correction of effects due to sensor sensitivity
In the case of optical sensors using a lens, the fringe area in the corners of an image is darker than the central area. This is called vignetting. Vignetting can be expressed by cos^n θ, where θ is the angle of a ray with respect to the optical axis; n depends on the lens characteristics, though it is usually taken as 4. In the case of electro-optical sensors, measured calibration data relating irradiance to the sensor output signal can be used for radiometric correction.
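The cos^n θ fall-off model above can be sketched in a few lines. This is a simplified illustration: the angle θ is computed from each pixel's radial offset assuming a hypothetical focal length expressed in pixel units, and a flat scene is used to show that dividing by the fall-off restores uniform brightness:

```python
import numpy as np

def vignetting_factor(rows, cols, focal_px, n=4):
    """Relative illumination cos^n(theta) for each pixel, where theta is
    the angle between the ray through the pixel and the optical axis."""
    yy, xx = np.mgrid[0:rows, 0:cols]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx)          # radial offset in pixels
    theta = np.arctan(r / focal_px)         # angle from the optical axis
    return np.cos(theta) ** n

def correct_vignetting(image, focal_px, n=4):
    """Divide out the cos^n fall-off so the corners match the centre."""
    factor = vignetting_factor(*image.shape, focal_px, n)
    return image / factor

# A flat scene imaged with vignetting should be recovered as flat.
factor = vignetting_factor(101, 101, focal_px=100.0)
observed = 200.0 * factor                   # darkened corners
recovered = correct_vignetting(observed, focal_px=100.0)
```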
a. Sun spot
Solar radiation reflected diffusely from the ground surface can produce lighter areas in an image, called a sun spot. The sun spot, together with vignetting effects, can be corrected by estimating a shading curve, determined by Fourier analysis to extract the low frequency component.
b. Shading
The shading effect due to topographic relief can be corrected using the angle between the solar
radiation direction and the normal vector to the ground surface.
Image noise is any unwanted disturbance in image data that is due to limitations in the sensing and data recording process. Random noise problems in digital data are characterized by nonsystematic variations in grey level from pixel to pixel, called bit errors. Such noise is often referred to as being 'spiky' in character, and it gives images a 'salt and pepper' or snowy appearance. Bit errors are handled by recognizing that noise values normally change much more abruptly than true image values. Thus, noise can be identified by comparing each pixel in an image with its neighbours. If the difference between a given pixel value and its surrounding values exceeds an analyst-specified threshold, the pixel is assumed to contain noise. The noisy pixel value can then be replaced by the average of its neighbouring values. Moving windows of 3 x 3 or 5 x 5 pixels are typically used in such procedures. The moving window concept basically involves a) projection of a 3 x 3 pixel window onto the image being processed; and b) movement of the window from line to line.
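The moving-window bit-error removal just described can be sketched as follows (a minimal, unoptimized illustration; the threshold value and the tiny test image are invented for the example):

```python
import numpy as np

def despike(image, threshold):
    """Replace each interior pixel that differs from the mean of its
    3 x 3 neighbours by more than `threshold` with that neighbour mean
    (the bit-error / salt-and-pepper removal described in the text)."""
    img = image.astype(float)
    out = img.copy()
    rows, cols = img.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            neighbour_mean = (window.sum() - img[i, j]) / 8.0
            if abs(img[i, j] - neighbour_mean) > threshold:
                out[i, j] = neighbour_mean
    return out

clean = np.full((5, 5), 100.0)
noisy = clean.copy()
noisy[2, 2] = 255.0                 # a single 'salt' bit error
restored = despike(noisy, threshold=50.0)
```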
The radiance (reflected or emitted) from features on the ground, converted into digital images, is degraded by low detector sensitivity, weak signals from objects on the earth's surface, similar reflectance of different objects, and environmental conditions at the time of recording. This produces a low contrast image whose features cannot easily be distinguished by the human eye.
Image enhancement techniques are used to manipulate the visual appearance of a digital image
for better interpretation by improving the information content of the image in the following ways.
• Contrast Enhancement
• Intensity, Hue and saturation transformations
• Density Slicing
• Edge Enhancement
• Making Digital Mosaics
• Producing synthetic stereo images
Digital image enhancement is performed by manipulating the remote sensing input data with various mathematical operators. These techniques can be broadly classified into two groups:
• Point operators
• Local operators
Point operations modify the value of each pixel in an image data set independently, whereas local operations modify the value of each pixel in the context of the pixel values surrounding it. Contrast enhancement is an example of a point operation, and spatial filtering of a local operation.
Contrast Enhancement
A remote sensing system, i.e., sensors mounted on board aircraft and satellites, should be capable
of imaging a wide range of scenes, from very low radiance (oceans, low solar elevation angles,
high altitudes) to very high radiance (snow, ice, sand, low altitudes). For any particular area that is
imaged, the sensor’s range must be set to accommodate a large range of scene radiance and have
as many bits per pixel as possible over this range for precise measurements. However, the full range, which is typically eight bits/pixel or more, is rarely used by any single scene. When such a scene is imaged, converted to DNs (Digital Numbers) and displayed on a black and white monitor which uses eight bits/pixel in each colour, it will appear dull and lacking in contrast because it is not using the full range available in the display.
For example, in Fig. 2, the histogram of image A shows the number of pixels that respond to each
DN. The central 92% of the histogram has a range of DNs from 49 to 106, which utilizes only
23% of the available brightness range. This limited range of brightness values accounts for the
low contrast ratio of the original image.
The aim of contrast enhancement is to expand the range of the original DN data to fill the
available display GL (Grey Level) range and thereby enhance the contrast of the digital image.
This transformation is called a contrast stretch.
Linear Enhancement
Linear contrast stretch is one of the simplest enhancement techniques used to improve the contrast of an image. It expands the image DN range to the full range of the display device (0-255, the range of values that can be represented by an 8-bit display device). This procedure is also called a min-max stretch (graph 1).
A DN (Digital Number) value in the low end of the original histogram is assigned to extreme
black, and a value at the high end is assigned to extreme white. In this example, (Fig 2) the lower
4% of pixels (DN<49) are assigned to black, or DN=0, and the upper 4% (DN>106) are assigned
to white, or DN=255. The map of Fig 1 shows the different features for comparison. The
intermediate values are interpolated between 0 and 255 by following the linear relationship
Y = a + bX
where X and Y are the input and output grey values of the same pixel, and "a" and "b" are the intercept and slope respectively.
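The min-max linear stretch Y = a + bX can be sketched as follows (a minimal illustration; the tiny sample band reuses the 49-106 DN range of the example above):

```python
import numpy as np

def linear_stretch(dn, dn_min, dn_max, gl_max=255):
    """Min-max linear stretch: map [dn_min, dn_max] onto [0, gl_max],
    clipping (saturating) values outside the chosen range."""
    b = gl_max / float(dn_max - dn_min)     # slope
    a = -b * dn_min                         # intercept, so Y = a + b*X
    y = a + b * dn.astype(float)
    return np.clip(np.round(y), 0, gl_max).astype(np.uint8)

# Low-contrast band occupying DNs 49-106, as in the example above.
band = np.array([[49, 60, 80],
                 [90, 106, 55]], dtype=np.uint8)
stretched = linear_stretch(band, dn_min=49, dn_max=106)
```

With dn_min and dn_max placed inside the observed range (as in the 4% saturation example), the clip step implements the saturation stretch described below.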
To increase the contrast, a saturation stretch may be implemented together with the linear stretch. Pixels with a DN outside the chosen range are transformed to a GL of either 0 or 255. Typically, saturation (clipping) of 1%-2% of the image pixels is a safe level at which there is no loss of image structure due to saturation.
A linear transformation can also be used to decrease image contrast if the image DN range exceeds that of the display. This situation occurs for radar imagery, some multispectral imagery such as AVHRR (10 bits/pixel), and most hyperspectral sensors (12 bits/pixel).
Fig. 1: Location map for the Landsat image of an area in northern Chile and Bolivia
B. Linear contrast stretch with the lower and upper four percent of pixels saturated to black and white respectively.
C. Gaussian Stretch
Figure 2: Portion of Landsat MSS band-4 image of an area in the Northern Andes, Chile
and Bolivia.
Non-Linear Enhancement
Non-linear contrast stretch is used when the image histogram is asymmetric and the DN values cannot be controlled by a simple linear transformation. This method expands one portion of the grey scale while compressing the other portion (graph 2). While spatial information is
preserved, quantitative radiometric information can be lost. Examples of non-linear stretches include the logarithmic stretch, exponential stretch, histogram equalization, etc.
(Graph 2) A - original histogram; B - non-linear enhancement; X - brightness levels; Y - image area (pixels).
Non-linear logarithmic contrast enhancement is used to emphasize details in the darker regions of the image by compressing the brightness values within an image. Here the output pixel grey values (Yij) are generated from the input pixel grey values (Xij) following a logarithmic expression of the form
Yij = a log(Xij) + b
where "a" and "b" are determined by taking the maximum and minimum grey values of the input image and the corresponding maximum and minimum values in the output image. The following are the characteristics of logarithmic enhancement:
a) It makes low contrast more visible by enhancing low contrast edges.
b) It provides a more constant signal to noise ratio.
c) It provides a more equal distribution of grey values.
d) It transforms multiplicative noise into additive noise.
Exponential contrast enhancement is used to compress low contrast edges while expanding high contrast edges. This highlights features having higher grey values, thereby enhancing the bright areas in an image. The technique produces less visible detail than the original image and is of limited use for image enhancement. The grey values (Xij) in the input image transform to grey values (Yij) in the output image as follows:
Yij = a e^(b Xij) + c
where a, b and c are constants; b is arbitrarily chosen between 0.01 and 0.1 to avoid excessively large values of the exponential, while "a" and "c" scale the dynamic range of the grey values of the output image to between 0 and 255.
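The logarithmic and exponential stretches can be sketched together. This is a simplified illustration: the scaling constants are derived here from the input minimum and maximum so that the output spans 0-255, as the text describes, and the value of b is an arbitrary choice within the stated range:

```python
import numpy as np

def log_stretch(x, out_max=255.0):
    """Logarithmic stretch: expands dark tones, compresses bright ones.
    Constants are chosen so the input range maps onto [0, out_max]."""
    x = x.astype(float)
    lo, hi = x.min(), x.max()
    return out_max * (np.log1p(x - lo) / np.log1p(hi - lo))

def exp_stretch(x, b=0.02, out_max=255.0):
    """Exponential stretch: expands bright tones, compresses dark ones."""
    x = x.astype(float)
    lo, hi = x.min(), x.max()
    e = np.exp(b * (x - lo)) - 1.0
    return out_max * e / (np.exp(b * (hi - lo)) - 1.0)

band = np.array([0, 32, 64, 128, 255], dtype=np.uint8)
dark_boosted = log_stretch(band)        # mid-tones pushed towards white
bright_boosted = exp_stretch(band)      # mid-tones pushed towards black
```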
Gaussian Stretch
A Gaussian stretch is used to enhance contrast within the tails of the histogram. This method is
called a Gaussian stretch because it involves the fitting of the observed histogram to a normal or
Gaussian histogram. A Gaussian or normal distribution is defined by
f(x) = C e^(-ax²), where C = (a/π)^0.5
The standard deviation σ is the range of x over which f(x) drops by a factor of e^(-0.5), i.e., to 0.607 of its maximum value, and σ = 1/(2a)^0.5. Thus, about 68% of the values of a normally distributed variable lie within one standard deviation of the mean. In this method of enhancement, each pixel value of the input image is converted to a LUT (Look Up Table) value based on the probability of each pixel value with respect to a class
LUT (Look Up Table) value based on the probability of each pixel value with respect to a class
following the Gaussian law. The normal distribution curve is shown in Graph 3 given below. In
both the cases of contrast enhancement based on histogram analysis of input image values, the
range of levels allocated to the output image exceeds the range of levels of pixel values in the
input image. This results in the overall brightening of the displayed image.
In the example (Fig 2C), the different lava flows are distinguished, and some details within the dry lake are emphasized. In this method the enhancement occurs at the expense of contrast in the middle grey range; the fracture pattern and some of the folds are suppressed in this image.
Density Slicing
Density Slicing is the mapping of a range of contiguous grey levels of a single band image to a
point in the RGB color cube. The DNs of a given band are "sliced" into distinct classes. For
example, for band 4 of a TM 8 bit image, we might divide the 0-255 continuous range into
discrete intervals of 0-63, 64-127, 128-191 and 192-255. These four classes are displayed as four
different grey levels. This kind of density slicing is often used in displaying temperature maps.
As image enhancement often drastically alters the original numeric data, it is normally used only for visual (manual) interpretation and not for further numeric analysis. Common enhancements include image reduction, image rectification, image magnification, transect extraction, contrast adjustments, band ratioing, spatial filtering, Fourier transformations, principal component analysis and texture transformation.
Image Enhancement Techniques:-
Image enhancement techniques are employed to make satellite imagery more informative and to help achieve the goal of image interpretation. The term enhancement means the alteration of the appearance of an image in such a way that the information contained in the image is more readily interpreted visually in terms of a particular need. Image enhancement techniques are applied either to single-band images or separately to the individual bands of a multiband image set.
Contrast Stretching
The operating, or dynamic, ranges of remote sensors are designed with a variety of eventual data applications in mind. Landsat TM images, for example, may be used to study deserts, ice sheets, oceans, forests, etc., requiring relatively low-gain sensors to cope with the widely varying radiances upwelling from dark, bright, hot and cold targets. Consequently, for any particular area that is imaged it is unlikely that the full radiometric range of a band will be utilized, and the resulting image is dull and lacking in contrast or over-bright. By remapping the DN distribution to the full display capabilities of an image processing system, however, we can recover a much improved image.
This technique involves the translation of the image pixel values from the observed range, DNmin to DNmax, to the full range of the display device (generally 0-255, the range of values representable in an 8-bit display device). The technique can be applied to a single-band, grey-scale image, where the image data are mapped to the display via all three colour LUTs.
It is not necessary to stretch between DNmin and DNmax: the inflection points for a linear contrast stretch may instead be placed at the 5th and 95th percentiles of the histogram, or at ±2 standard deviations from the mean, or chosen to cover the class of land cover of interest (e.g. water at the expense of land, or vice versa). It is also straightforward to use more than two inflection points, yielding a piecewise linear stretch.
Histogram Equalization
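Histogram equalization remaps DNs so that each output level contains an approximately equal number of pixels; the look-up table is the normalised cumulative histogram. A minimal sketch (the synthetic low-contrast band is invented for illustration):

```python
import numpy as np

def histogram_equalize(band, levels=256):
    """Histogram equalization: build a LUT from the normalised cumulative
    histogram so output levels carry roughly equal pixel counts."""
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                              # normalise to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[band]                            # apply LUT pixel by pixel

# A low-contrast band clustered in the middle of the DN range.
rng = np.random.default_rng(0)
band = rng.integers(90, 110, size=(64, 64), endpoint=True).astype(np.uint8)
equalized = histogram_equalize(band)
```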
Gaussian Stretch
This method of contrast enhancement based upon the histogram of the pixel values is called a Gaussian stretch because it involves fitting the observed histogram to a normal or Gaussian histogram, defined as follows:
f(x) = (a/π)^0.5 exp(-ax²)
The operations of addition, subtraction, multiplication and division are performed on two or more co-registered images of the same geographical area. These techniques are applied to images from separate spectral bands of a single multispectral data set, or to individual bands from image data sets collected on different dates. More complicated algebra is sometimes encountered in the derivation of sea-surface temperature from multispectral thermal infrared data (the so-called split-window and multichannel techniques).
Addition of images is generally carried out to produce an output image whose dynamic range equals that of the input images.
Band subtraction is sometimes carried out on co-registered scenes of the same area acquired at different times for change detection.
Band ratioing:-
Band ratioing, or division of images, is probably the most common arithmetic operation applied to images in geological, ecological and agricultural applications of remote sensing. Ratio images are enhancements resulting from the division of the DN values of one spectral band by the corresponding DN values of another band. One motivation for this is to iron out differences in scene illumination due to cloud or topographic shadow. Ratio images also bring out spectral variation between different target materials. Multiple ratio images can be used to drive the red, green and blue monitor guns for colour images. Interpretation of ratio images must consider that they are "intensity blind", i.e., dissimilar materials with different absolute reflectances but similar relative reflectances in the two or more utilized bands will look the same in the output image.
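A band ratio, and the way it irons out illumination differences, can be sketched as follows. This is a minimal illustration with invented pixel values; the small epsilon guarding against division by zero is an implementation convenience, not part of the definition:

```python
import numpy as np

def band_ratio(numerator, denominator, eps=1e-6):
    """Pixel-by-pixel ratio of two co-registered bands; eps avoids
    division by zero in completely dark pixels."""
    return numerator.astype(float) / (denominator.astype(float) + eps)

# Sunlit and shadowed pixels of the same material have DNs scaled by the
# same illumination factor, so their ratio is (nearly) identical: this is
# why ratio images suppress topographic shadow and are 'intensity blind'.
nir = np.array([[120.0, 60.0]])   # second pixel in topographic shadow
red = np.array([[40.0, 20.0]])
ratio = band_ratio(nir, red)
```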
Spatial filtering:-
Spatial filtering is a "local" operation in that pixel values in an original image are modified on the basis of the grey levels of neighbouring pixels.
• Spatial filters emphasize or de-emphasize image data of various spatial frequencies (the roughness of the tonal variations in an image).
• Spatial frequency is defined as the number of changes in brightness value per unit distance for any particular part of an image.
• Low-pass filters:
• Emphasize large area changes in brightness.
• De-emphasize local detail.
• Reduce random noise.
Convolution Filters:-
One class of filtering methods is based upon the transformation of the image into its scale or spatial frequency components using the Fourier transform. The spatial domain filters, or convolution filters, are generally classed as either high-pass (sharpening) or low-pass (smoothing) filters. Simply subtracting the low-frequency image resulting from a low-pass filter from the original image can enhance the high spatial frequencies. High-frequency information allows us either to isolate or to amplify local detail. If the high-frequency detail is amplified by adding back to the image some multiple of the high-frequency component extracted by the filter, the result is a sharper, de-blurred image.
High-pass convolution filters can be designed with a PSF (point spread function) having a positive centre weight and negative surrounding weights. A typical 3x3 Laplacian filter has a kernel with a high central
value, 0 at each corner, and -1 at the centre of each edge. Such filters can be biased in certain
directions for enhancement of edges.
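The Laplacian kernel just described (high centre, 0 at the corners, -1 at the centre of each edge) and the add-back sharpening step can be sketched as follows (a minimal illustration; borders are left at zero for brevity):

```python
import numpy as np

# Laplacian kernel as described: high centre weight, 0 at each corner,
# -1 at the centre of each edge.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)

def convolve3x3(image, kernel):
    """Plain 3 x 3 convolution; border pixels are left at zero."""
    img = image.astype(float)
    out = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = (img[i - 1:i + 2, j - 1:j + 2] * kernel).sum()
    return out

# A flat field gives zero response; a spike or edge gives a strong one.
flat = np.full((5, 5), 80.0)
spike = flat.copy()
spike[2, 2] = 180.0
high_pass = convolve3x3(spike, LAPLACIAN)
sharpened = spike + high_pass        # add detail back: a de-blurred image
```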
High-pass filtering can also be performed simply using the mathematical concept of derivatives, i.e., gradients in DN throughout the image. Since images are not continuous functions, calculus is dispensed with and derivatives are instead estimated from the differences in the DN of adjacent pixels in the x, y or diagonal directions. Directional first differencing aims at emphasizing edges in an image.
Edge enhancement:-
• Edge enhancement is a digital image processing filter that improves the apparent
sharpness of an image or video. The creation of bright and dark highlights on either side
of any line leaves the line looking more contrasted from a distance. The process is most
prevalent in the video field, appearing to some degree in the majority of TV broadcasts
and DVDs. Standard television sets' "sharpness" control is an example of edge
enhancement. It is also widely used in computer printers especially for font or/and
graphics to get a better printing quality.
• Edge enhancement is concerned with the linear features in images. Some linear features
occur as narrow lines against a background of contrasting brightness; others are the linear
contact between adjacent areas of different brightness. In all cases linear features are
formed by edges. Some edges are marked by pronounced differences in brightness and
are readily recognized.
• Other edges are marked by subtle brightness differences that may be difficult to recognize. Contrast enhancement may emphasize the brightness differences associated with some linear features.
• Edge-enhancement images attempt to preserve local contrast and low frequency
brightness information.
• High frequency component image is produced using the appropriate kernel size.
• All or a fraction of the grey level in each pixel is added back to high-frequency
component image.
• The composite image is contrast-stretched.
Digital filters have been developed specifically to enhance edges in images; they fall into two categories: directional and non-directional filters.
Spectral pattern recognition: this procedure utilizes the pixel-by-pixel spectral information as the basis for automated land cover classification.
Spatial pattern recognition: this procedure involves the categorization of image pixels on the basis of their spatial relationship with the pixels surrounding them. Aspects such as image texture, pixel proximity, feature size, shape, directionality, repetition and context are covered by this procedure.
Temporal pattern recognition: this procedure uses time as an aid in feature identification, analysing data from imagery recorded on different dates. This is particularly pertinent in the case of crop surveys, as crop imagery changes through the growing season.
These procedures can be combined when the need arises. The approach chosen depends on the nature of the data being analyzed, the computational resources available and the intended application of the classified data.
The two main approaches in multi-spectral classification activities can be identified as:
• Supervised classification
• Unsupervised classification
In the case of supervised classification, the software system delineates specific land cover types
based on statistical characterization data drawn from known examples in the image (known as
training sites). With unsupervised classification, however, clustering software is used to uncover
the commonly occurring land cover types, with the analyst providing interpretations of those
cover types at a later stage.
When the accuracy and efficiency of the classification process needs to be improved, then aspects
of both supervised and unsupervised classification can be combined to arrive at a hybrid
classification procedure.
SUPERVISED CLASSIFICATION:
1. Training stage:–
• The analyst identifies representative training areas and develops numerical descriptions of
the spectral signatures of each land cover type of interest in the scene. This is also called
signature analysis.
• The actual classification of multispectral image data is a highly automated process.
However, assembling the training data needed for classification requires close interaction
between the image analyst and the image data. It also requires substantial reference data
and a thorough knowledge of the geographic area to which the data apply. The quality of
the training process determines the success of the classification stage and thereby the
value of the information generated from the entire procedure.
• It is during the training stage that the location, size, shape and orientation of the training sites for each land cover class are determined.
• The training data must be representative and complete. This implies that the image
analyst must develop training statistics for all spectral classes constituting each
information class to be discriminated by the classifier. For example, an information class
such as agriculture will contain different crop types and each crop type might be
represented by several spectral classes. These spectral classes would arise from different
planting dates, soil moisture conditions, crop management practices, seed varieties and
several other factors and their combinations.
Each pixel in the image data set is categorized into the land cover class it most closely resembles. If the pixel is insufficiently similar to any training data, it is usually labeled 'unknown'. Classifiers are the techniques used for making these decisions about resemblance. There are three different kinds of classifiers: hard, soft and hyperspectral.
Hard classifier:-
The distinguishing characteristic of hard classifiers is that they all make a definitive decision
about the land cover class to which any pixel belongs. IDRISI offers three supervised classifiers
in this group: Parallelepiped (PIPED), Minimum Distance to Means (MINDIST), and Maximum
Likelihood (MAXLIKE). They differ only in the manner in which they develop and use a
statistical characterization of the training site data. Of the three, the Maximum Likelihood
procedure is the most sophisticated, and is unquestionably the most widely used classifier in the
classification of remotely sensed imagery.
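Of the hard classifiers named above, the Minimum Distance to Means rule is the simplest to sketch: each pixel is assigned to the class whose training mean vector is nearest in spectral space. A minimal illustration (the two-band class means and pixel values are invented for the example):

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose training-site mean vector is
    nearest in spectral space (the minimum-distance hard classifier)."""
    # Pairwise distances, shape (n_pixels, n_classes).
    diffs = pixels[:, None, :] - class_means[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    return dists.argmin(axis=1)         # a definitive class per pixel

# Hypothetical two-band training means for three cover classes.
means = np.array([[20.0, 10.0],         # class 0: water
                  [40.0, 90.0],         # class 1: vegetation
                  [90.0, 70.0]])        # class 2: bare soil
pixels = np.array([[22.0, 12.0],
                   [38.0, 85.0],
                   [95.0, 68.0]])
labels = minimum_distance_classify(pixels, means)
```

A "hard" decision is made for every pixel; adding a distance threshold and labelling pixels beyond it 'unknown' would handle the insufficiently-similar case mentioned earlier.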
Soft classifier:-
Contrary to hard classifiers, soft classifiers do not make a definitive decision about the land cover
class to which each pixel belongs. Rather, they develop statements of the degree to which each
pixel belongs to each of the land cover classes being considered. Thus, for example, a soft
classifier might indicate that a pixel has a 0.72 probability of being forest, a 0.24 probability of
being pasture, and a 0.04 probability of being bare ground. A hard classifier would resolve this
uncertainty by concluding that the pixel was forest. However, a soft classifier makes this
uncertainty explicitly available, for any of a variety of reasons. For example, the analyst
might conclude that the uncertainty arises because the pixel contains more than one cover
type and could use the probabilities as indications of the relative proportion of each. This
is known as sub-pixel classification. Alternatively, the analyst may conclude that the
uncertainty arises because of unrepresentative training site data and therefore may wish to
combine these probabilities with other evidence before hardening the decision to a final
conclusion.
The typical forms of output products are thematic maps, tables and digital data files,
which become input data for GIS.
The figure given below shows the flow of operations to be performed
Unsupervised classification:
• This procedure examines the data and breaks it into the most prevalent natural spectral
groupings, or clusters, present in the data. The analyst then identifies these clusters as
land cover classes through a combination of familiarity with the region and ground truth
visits. The logic by which unsupervised classification works is known as cluster analysis.
• In contrast to supervised classification, where the system needs to be told about the
character (i.e., signature) of the information classes we are looking for, unsupervised
classification requires no advance information about the classes of interest. It is important
to recognize, however, that the clusters unsupervised classification produces are not
information classes, but spectral classes (i.e., they group together features (pixels) with
similar reflectance patterns). It is thus usually the case that the analyst needs to reclassify
spectral classes into information classes. For example, the system might identify classes
for asphalt and cement which the analyst might later group together, creating an
information class called pavement.
• Access to efficient hardware and software is an important factor in determining the ease
with which an unsupervised or supervised classification can be performed. The quality of
the classification will depend upon the analyst’s understanding of the concepts behind the
classifiers available and knowledge about the land cover types under analysis.
Hybrid classification:-
Euclidean Distance
The distance between two points is the length of the path connecting them. In the plane, the distance between points (x1, y1) and (x2, y2) is given by the Pythagorean theorem:
d = ((x2 - x1)² + (y2 - y1)²)^0.5
Mahalanobis distance
Given two "points" x1 and x2 defined by numerical attributes (e.g., two observations), the distance between these two points is given by the traditional Euclidean distance:
d(x1, x2) = (Σi (x1i - x2i)²)^0.5
Given a multivariate normal distribution, one defines the (square of the) Mahalanobis distance of an observation x to the barycentre g of the distribution as:
D²(x) = (x - g)ᵀ V⁻¹ (x - g)
where V is the covariance matrix of the distribution. Two observations sitting in regions with the same density are at the same (Mahalanobis) distance from the barycentre (although their Euclidean distances from the barycentre may be quite different). Points that are at a given Mahalanobis distance from the barycentre sit on an ellipsoid centred on the barycentre.
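The contrast between the two distances can be sketched numerically. In this invented example the distribution is strongly elongated along the first axis, so a point far out along that "wide" axis and a point close in along the "narrow" axis have very different Euclidean distances but identical Mahalanobis distances:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of observation x from the barycentre `mean`
    of a distribution with covariance matrix `cov`."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

mean = np.array([0.0, 0.0])
# Elongated distribution: large variance along the first axis.
cov = np.array([[25.0, 0.0],
                [0.0, 1.0]])

a = np.array([5.0, 0.0])    # far out along the 'wide' axis
b = np.array([0.0, 1.0])    # close in along the 'narrow' axis

euclid_a = float(np.linalg.norm(a - mean))   # 5.0
euclid_b = float(np.linalg.norm(b - mean))   # 1.0
maha_a = mahalanobis(a, mean, cov)           # both one standard
maha_b = mahalanobis(b, mean, cov)           # deviation from g
```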
estimated by the NASS. Distance values from 43 through 111 are classified as likely to be corn
because the cumulative total of the acreage for these pixels constituted the remaining 25.0% of the
acreage estimated by NASS.
Distance values greater than 111 are classified as unlikely to be corn.
3. Results
Overall average accuracy (correctly classified samples for all classes divided by the total number of samples) was 92.2%, with individual county accuracies ranging from 90.1 to 96.5%.
4. Discussion
The results of our study indicate that an automated approach to classifying corn from Landsat
satellite imagery may be feasible. The primary advantage of this method is the ability to perform
rapid interpretation of the satellite imagery without the need for ground reference data to ‘train’
the classification algorithm. This is especially important in creating historical maps, because
ground reference data may not be available.
The correlation between strong ground motions and geology was identified in the mid-1800s (Del Barrio, 1855; Mallet, 1862). Recent studies by Borcherdt (1994) and Anderson et al. (1996) have quantified the influence of near-surface geology on ground motions. These studies suggest using geology as an initial regional
classification for seismic zonation. In this study, the use of remote sensing imagery for regional
classification is evaluated. In particular, the objective is to identify Holocene-age deposits that
may be susceptible to ground motion amplification. Site response is then determined for
Holocene-age and Pleistocene-age deposits in the Mississippi Embayment based on additional
subsurface information.
Holocene-age alluvial deposits in the floodplains are distinguished from loess deposits of
Pleistocene/Pliocene age in the inland, terrace regions based on spectral contrast and texture.
Agbu et al. (1990) observed that spectral reflectance is related to subsurface conditions since
subsurface conditions affect the properties observed at the surface. The variation in soil type,
moisture content, and geology influences the spectral reflectance and texture. Therefore, spectral
reflectance and texture are the basis for classification in this study.
Landsat TM Images
The Landsat Thematic Mapper (TM) is a multispectral sensor measuring electromagnetic energy
in seven spectral bands ranging from the visible to the thermal infrared. Each pixel represents an
area 30 m by 30 m for six of the seven bands whereas pixels in the thermal infrared band
represent an area 120 m by 120 m. An image from the Landsat TM satellite was selected to
assess the feasibility of using satellite imagery for identifying regions susceptible to ground
motion amplification. In particular, imagery was analyzed to distinguish between Holocene-age
and Pleistocene-age deposits. Holocene-age deposits are susceptible to ground motion
amplification due to the loose, unconsolidated state of deposition. In the Central United States,
Holocene-age deposits are found throughout the floodplains of major rivers. Pleistocene-age
deposits are located in the upland, terrace regions. Analysis of imagery focused on
distinguishing between the two geologic deposits.
The Landsat TM image was obtained from the USGS Earth Resources Observation Systems
(EROS) Data Center and georeferenced to the Universal Transverse Mercator (UTM) coordinate
system that is based on the North American Datum of 1927. The image was obtained on
November 22, 1986 from the Landsat TM 5 satellite launched in March 1984. Autumnal images
were selected due to the lack of vegetation cover allowing imaging of the surface geology. A
portion of the acquired image is shown in Figure 1.
Figure 1 Part of Landsat TM image acquired showing the Jackson Purchase area of western
Kentucky.
Study Area
The study area was selected to evaluate the use of Landsat TM imagery for regional seismic
zonation and is located northeast of the NMSZ. The study area is a subset of the area in Figure 1
and is located in the Jackson Purchase region of western Kentucky. The study area is bounded
by the Ohio River to the northwest and the Mississippi River to the southwest. Figure 2 shows
the selected study area including parts of Kentucky, Missouri, and Illinois and is composed of
1000 by 1000 pixels.
Spectral Classification
The first approach to classification or segmentation is based on the pixel brightness values or
relative spectral reflectance of the image. Histogram equalization was used to enhance the
contrast in the image. The image in Figure 3 was then passed through a low-pass filter to reduce
the effect of cultural boundaries and agricultural features and enhance geologic features. The
image was then classified by image segmentation where low pixel values (dark pixels) were
labeled Holocene-age deposits and high pixel values (white pixels) were labeled Pleistocene-age
deposits. The result of this classification is shown in Figure 4.
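The enhance-filter-threshold pipeline described above can be sketched with numpy; the 5 x 5 window and the threshold of 128 are illustrative choices, not values taken from the study:

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit image to stretch contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    return (cdf[img] * 255).astype(np.uint8)

def low_pass(img, size=5):
    """Mean filter to suppress fine cultural/agricultural detail."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def segment(img, threshold=128):
    """Dark pixels -> Holocene (1), bright pixels -> Pleistocene (0)."""
    return (img < threshold).astype(np.uint8)

band = np.random.randint(0, 256, (100, 100)).astype(np.uint8)
classes = segment(low_pass(equalize(band)))
```

In practice the filter size and threshold would be tuned against known geologic boundaries before classifying the full scene.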
Texture Classification
Texture is related to patterns in pixel brightness values. Several approaches have been applied to
quantify textural analysis including first-order and second-order statistics, directional filters, and
fractal geometry. First-order statistics include calculating the mean and standard deviation of a
pixel cluster. First-order statistics are used in this study to quantify texture and are described
below. The statistics of a 35 by 35 pixel neighborhood were compared with the mean of
identified Holocene-age and Pleistocene-age regions. The minimum Euclidean distance was used
to classify pixels. The results of the texture classification are shown in Figure 5.
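The first-order statistics and minimum-Euclidean-distance rule can be sketched as follows; the reference (mean, standard deviation) signatures are placeholders, not the values measured in the study:

```python
import numpy as np

def window_stats(img, i, j, size=35):
    """First-order texture statistics (mean, std) of a size x size window."""
    h = size // 2
    win = img[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1].astype(float)
    return np.array([win.mean(), win.std()])

# Reference signatures measured over identified regions -- the numbers
# here are placeholders, not values from the study.
signatures = {"holocene": np.array([60.0, 8.0]),
              "pleistocene": np.array([120.0, 20.0])}

def classify(img, i, j):
    """Assign the class whose signature is nearest in Euclidean distance."""
    stats = window_stats(img, i, j)
    return min(signatures, key=lambda k: np.linalg.norm(stats - signatures[k]))

label = classify(np.full((100, 100), 60.0), 50, 50)
```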
The intent of the classification process is to categorize all pixels in a digital image into one of
several land cover classes, or "themes". This categorized data may then be used to produce
thematic maps of the land cover present in an image. Normally, multispectral data are used to
perform the classification and, indeed, the spectral pattern present within the data for each pixel is
used as the numerical basis for categorization. The objective of image classification is to identify
and portray, as a unique gray level (or color), the features occurring in an image in terms of the
object or type of land cover these features actually represent on the ground.
Image classification is perhaps the most important part of digital image analysis. It is very nice to
have a "pretty picture", an image showing a multitude of colors illustrating various features of
the underlying terrain, but it is quite useless unless one knows what the colors mean. The two main
classification methods are Supervised Classification and Unsupervised Classification.
A set of values for a single pixel on each of a number of spectral bands, such as {30, 20, 12, 10},
is often referred to as a pattern. The characteristics or variables (such as Landsat-4 MSS bands 1,
2, 3 and 4) which define the basis of the pattern are called features. A pattern is thus a set of
measurements on the chosen features. Hence the classification process can be described as a form
of pattern recognition, or the identification of the pattern associated with each pixel position in an
image in terms of characteristics of the objects or materials at the corresponding point on the
Earth's surface. Pattern recognition methods have found widespread use in fields other than
environmental remote sensing; for example, military applications include the identification of
approaching aircraft and the detection of targets for cruise missiles. Robot or computer vision
involves the use of mathematical descriptions of objects "seen" by a television camera
representing the robot eye, and the comparison of these mathematical descriptions with patterns
describing objects in the real world. In every case, the crucial steps are (i) selection of the
particular features which best describe the pattern and (ii) choice of a suitable method for the
comparison of the pattern describing the object being classified and the target patterns. In remote
sensing applications it is usual to include a third stage, that of assessing the degree of accuracy of
the allocation process.
Unsupervised classification
Unsupervised classification is a method which examines a large number of unknown pixels and
divides them into a number of classes based on natural groupings present in the image values. Unlike
supervised classification, unsupervised classification does not require analyst-specified training
data. The basic premise is that values within a given cover type should be close together in the
measurement space (i.e. have similar gray levels), whereas data in different classes should be
comparatively well separated (i.e. have very different gray levels). The classes that result from
unsupervised classification are spectral classes based on natural groupings of the image values.
Because the identity of each spectral class is not initially known, the analyst must compare the
classified data to some form of reference data (such as larger scale imagery, maps, or site visits)
to determine the identity and informational values of the spectral classes. Thus, in the supervised
approach, the analyst defines useful information categories and then examines their spectral
separability; in the unsupervised approach, the computer determines spectrally separable classes
and the analyst then defines their information value.
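A widely used algorithm for forming such natural groupings is k-means clustering. The sketch below (plain numpy, with illustrative parameters) groups pixel vectors into spectral classes that the analyst must still label against reference data:

```python
import numpy as np

def kmeans(pixels, k=4, iters=20, seed=0):
    """Cluster pixel spectral vectors into k spectral classes by k-means.

    The algorithm only groups pixels; labelling the resulting spectral
    classes still requires reference data."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest class centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centre as the mean of its members.
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "cover types", 3 bands each.
pixels = np.vstack([np.zeros((10, 3)), np.full((10, 3), 100.0)])
labels, centers = kmeans(pixels, k=2)
```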
accuracy assessment procedures, this tool can provide a remarkably rapid means of producing
quality land cover data on a continuing basis.
Once a classification exercise has been carried out there is a need to determine the degree of error
in the end-product. These errors could be thought of as being due to incorrect labeling of the
pixels.
The basic idea is to compare the predicted classification (supervised or unsupervised) of each
pixel with the actual classification as discovered by ground truth.
The analyst selects a sample of pixels and then visits the sites (or vice-versa), and builds a
confusion matrix (IDRISI module CONFUSE). This is used to determine the nature and
frequency of errors.
cells of the matrix = count of the number of observations for each (ground, map) combination
diagonal elements = agreement between ground and map; the ideal is a matrix with all zero off-diagonal elements
errors of omission (map producer’s accuracy) = incorrect in column / total in column. Measures
how well the map maker was able to represent the ground features.
errors of commission (map user’s accuracy) = incorrect in row / total in row. Measures how likely
the map user is to encounter correct information while using the map.
Statistical test of the classification accuracy for the whole map or individual cells is possible using
the kappa index of agreement. This is like a chi-square (χ²) test except that it accounts for chance agreement.
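The confusion-matrix statistics above can be computed directly. The sketch below follows the row = map, column = ground orientation implied by the text, with an illustrative two-class matrix:

```python
import numpy as np

def accuracy_report(m):
    """m[i, j] = test pixels mapped as class i whose ground truth is class j
    (rows = map, columns = ground)."""
    m = np.asarray(m, dtype=float)
    total, correct = m.sum(), np.trace(m)
    overall = correct / total
    producers = np.diag(m) / m.sum(axis=0)   # per ground (column) class
    users = np.diag(m) / m.sum(axis=1)       # per mapped (row) class
    # Kappa: observed agreement corrected for chance agreement.
    chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / total ** 2
    kappa = (overall - chance) / (1 - chance)
    return overall, producers, users, kappa

m = [[35, 5],    # illustrative counts, not from the text
     [10, 50]]
overall, producers, users, kappa = accuracy_report(m)
```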
This method stands or falls by the availability of a test sample of pixels for each of the k
classes. The use of training-class pixels for this purpose is dubious—one cannot logically train
and test a procedure using the same data. A separate set of test pixels should therefore be used for
the calculation of classification accuracy. Users of the method should be cautious in interpreting
the results if the ground data from which the test pixels were identified were not collected on the
same date as the remotely-sensed image, for crops can be harvested or forests cleared. So far as
possible the test pixel labels should adequately represent reality.
Remote sensing is defined as the science and art of acquiring information about material objects
without being in physical contact with them. These measurements are possible because sensors or
instruments are designed to measure the spectral reflectance of earth objects. It has been found that
each earth cover type has its own spectral reflectance characteristics. These characteristics are so
distinctive that they are called a "signature", which enables us to discern objects from their
intermixed background.
The final remote sensing process is completed by the analysis of the data using image
interpretation techniques. Some key elements, or cues from the imagery, such as shape, size,
pattern, tone, colour, shadow and association, are used to identify a variety of features on earth.
The techniques of remote sensing and image interpretation yield valuable information on earth
resources. The different image interpretation elements are discussed below,
Shape
It is the general form, configuration and outline of the feature. In the case of stereoscopic
photographs the object height is also important, which helps identify the shape of the object. The
shape may not be regular, but it is very effective for image interpretation.
Size
The size of an object in a photograph is determined by the scale of the photograph. The sizes of
different object in the same photograph help the interpreter to identify the object in many cases.
Pattern
It is the spatial arrangement of objects. The repetition of certain general forms of many natural or
constructed objects creates a pattern that helps to recognize them in the photo. The pattern can be
regular, curvilinear or meandering. For example, a river generally shows a meandering pattern,
while the pattern of agricultural land is regular in most cases.
Colour
The colour difference is very effective for identifying any object. For example, the colour of river
water in an aerial photograph appears black or dark grey, while a road appears white or light
grey.
Texture
It is the frequency of tonal change in the image. It is basically the combination of shape, size,
pattern, shadow and tone. Texture is produced by the aggregation of unit features that may be too
small to identify individually in a photograph; for example, it is very difficult to identify each
leaf of a tree or its shadow. Texture gives the overall visual smoothness or coarseness of image
features. It also varies with scale: if we keep reducing the scale of an image, beyond a certain
limit the texture of an object becomes progressively finer and ultimately disappears. An object
with similar reflectance can also be identified by its
texture. For example, green grass and a rough-textured green tree can easily be distinguished by
their different textures.
Shadow
Shadows are important for two opposing reasons:
1. The shape of a shadow gives an impression of the profile view of the object, which aids
interpretation.
2. Objects within shadows reflect little light and are difficult to discern on photographs,
which hinders interpretation.
Shadows from subtle variations in terrain elevations, especially in the case of low sun angle
photographs, can aid in assessing natural topographic variations that may be diagnostic of various
geologic landforms.
Site
It means the geographic or topographic location. It is most important for the identification of
vegetation types.
Association
It means the occurrence of an object in a photograph in relation to other objects.
The referencing scheme, which is unique for each IRS satellite mission, is a means of conveniently
identifying the geographic location of points on the earth. This scheme is designated by Paths and
Rows. The Path-Row concept is based on the nominal orbital characteristics.
Path
An orbit is the course of motion taken by the satellite in space and the ground trace of the orbit is
called a 'Path'. In a 24 day cycle, the satellite completes 341 orbits with an orbital period of
101.35 minutes. This way, the satellite completes approximately 14 orbits per day. Though the
number of orbits and paths are the same, the designated path number in the referencing scheme
and the orbit number are not the same. On day one (D1), the satellite covers orbit numbers 1 to
14, which as per the referencing scheme will be path numbers 1, 318, 294, 270, 246, 222, 198,
174, 150, 126, 102, 78, 54 and 30, assuming that the cycle starts with path 1. So orbit 1
corresponds to path 1, orbit 2 to path 318, orbit 3 to path 294 etc. The fifteenth orbit or first orbit
of day two (D2), is path 6 which will be to the east of path 1 and is separated from path 1 by 5
paths.
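The day-one sequence above follows from stepping 24 paths westward per orbit, modulo the 341 paths in a cycle. A small sketch (assuming, as the text does, that the cycle starts with path 1) reproduces the sequence:

```python
def path_for_orbit(orbit):
    """Nominal path number for a given orbit in the 341-orbit, 24-day
    cycle: each successive orbit shifts 24 paths westward (modulo 341)."""
    return (1 - 24 * (orbit - 1) - 1) % 341 + 1

# Orbits 1..14 of day one: 1, 318, 294, 270, ..., 54, 30.
day_one = [path_for_orbit(k) for k in range(1, 15)]
```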
Path number one is assigned to the track which is at 29.7 deg West longitude. The gap between
successive paths is 1.055 deg. All subsequent orbits fall westward. Path 1 is so chosen, that the
pass with a maximum elevation greater than 86 deg for the data reception station of NRSA at
Shadnagar can be avoided. This is due to the limitation of antenna drive speed, since it is difficult
to track the satellite around zenith. In fact, above 86 deg elevation, if a pass occurs, the data may
be lost for a few seconds around zenith. Hence, the path pattern is chosen such that overhead
passes over the data reception station are reduced to a minimum. To achieve this, path 1 is
positioned in such a manner that the data reception station is exactly between two nominal paths,
namely 99 and 100. During operation, the actual path may vary from the nominal path pattern due
to variations in the orbit by perturbations. Therefore, the orbit is adjusted periodically, after
certain amount of drift, to bring the satellite into the specified orbit.
The path pattern is controlled within ±5 km about the nominal path pattern. Due to this movement
of actual paths within ±5 km about the nominal path, it is not possible to totally avoid above 86
deg elevation passes for Hyderabad. However, with this approach, the number of passes above 86
deg elevation is reduced to almost one in a 24-day cycle.
Row
Along a path, the continuous stream of data is segmented into a number of scenes of convenient
size. While framing the scenes, the equator is taken as the reference line for segmentation. The
scenes are framed in such a manner that one of the scene centres lies on the equator. For example, a
LISS-III scene, consisting of 6000 lines, is framed such that the centre of the scene lies on the
equator. The centre of the next scene is then defined 5,703 lines further along the path, and so on.
This is continued up to
81 deg North latitude. The lines joining the corresponding scene centers of different paths are
parallel to the equator and are called Rows. The uniformly separated scene centers are such that
same rows of different paths fall at the same latitude. The row number 1 falls around 81 deg
North latitude, row number 41 will be near 40 deg North and row number of the scene lying on
the equator is 75. The Indian region is covered by row numbers 30 to 90 and path numbers 65 to
130.
1. The Path-Row referencing scheme eliminates the usage of latitude and longitudes and
facilitates convenient and unique identification of a geographic location
2. Useful in preparing accession and product catalogues and reduces the complexity of data
products generation
3. Using the referencing scheme, the user can arrive at the number of scenes that covers his
area of interest. However, due to orbit and attitude variations during operation, the actual
scene may be displaced slightly from the nominal scene defined in the referencing
scheme. Hence, if the user's area of interest lies in border region of any scene, the user
may have to order the overlapping scenes in addition to the nominal scene.
The referencing scheme of IRS-1C is different from that of IRS-1A/1B. In the IRS-1C referencing
scheme, the adjacent path occurs after five days and not on the next day as in the case of IRS-1A/1B.
This type of referencing scheme has been chosen keeping in view the PAN sensor, so that the
revisit capability of 5 days can be met. The following table gives the major differences in terms of
referencing scheme pattern of IRS-1C from IRS-1A/1B.
                             IRS-1A/1B      IRS-1C
Altitude                     904 km         817 km
Repetivity                   22 days        24 days
Consecutive path             D + 1 day      D + 5 days
Numbering of paths           East to West   West to East
Total number of orbits/cycle 307            341
IRS-1C and 1D have slightly different orbits and for this reason do not have the same reference
system.
The mean equatorial crossing time in the descending node is 10:30 a.m. ± 5 minutes. The orbit
adjust system is used to attain the required orbit initially and it is maintained throughout the
mission period. The ground trace pattern is controlled within ± 5 km of the reference ground trace
pattern.
Spatial Analysis:
GIS is designed to support a range of different kinds of analysis of geographic information: techniques to
examine and explore data from a geographic perspective, to develop and test models, and to present data
in ways that lead to greater insight and understanding. All of these techniques fall under the general
umbrella of "spatial analysis".
• Using Spatial Analyst, GIS users can create, query, map and analyze cell-based raster data and
derive new information from existing data.
• Information about geospatial data, such as terrain analyses, spatial relationships and suitable
locations, can be obtained using Spatial Analyst. ArcGIS Spatial Analyst integrates real-world
variables such as elevation into the geospatial environment to help solve complex problems.
• ArcGIS Spatial Analyst bridges the gap between a simple map on a computer and real-world
analysis for deriving solutions to complex problems.
• Data Integration: ArcGIS Spatial Analyst integrates the user's data, enabling interaction between
data of many different types; images, elevation models and other raster surfaces can be combined
with CAD data, vector data, internet data and many other formats to provide integrated analysis.
• Visualization: In addition to high-powered analysis and modeling, Spatial Analyst also allows
analysts to visualize their data as never before. ArcGIS Spatial Analyst is integrated with ArcMap
so that the user can create stunning visual displays with the powerful symbology and annotation
options available.
• Sophisticated Raster Data Analysis: ArcGIS Spatial Analyst provides a robust environment for
advanced raster data analysis. This environment enables density mapping, distance analysis,
surface analysis, grid statistics, spatial modeling and surface creation.
• Query: A key component of Spatial Analyst is the ability to perform queries across different
raster data sets in the raster calculator. This allows the analyst to ask questions that combine
several layers of information, for example: which areas are zoned for residential development,
have a high water table, and lie on a slope steeper than 15%? The query functionality gives the
analyst the ability to leverage existing data and to make more informed decisions.
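A raster-calculator query of this kind is simply an element-wise boolean expression evaluated over co-registered grids. A minimal numpy sketch, where the layer names, zoning codes and thresholds are illustrative rather than from the text:

```python
import numpy as np

# Hypothetical co-registered input rasters (values are illustrative).
zoning = np.array([[1, 2],               # 1 = residential zoning code
                   [1, 1]])
water_table = np.array([[3.0, 1.0],      # depth to water table, metres
                        [0.5, 2.5]])
slope_pct = np.array([[20.0, 5.0],       # slope in percent
                      [18.0, 30.0]])

# "Which cells are zoned residential, have a shallow (high) water table,
# and lie on slopes steeper than 15%?"
result = (zoning == 1) & (water_table < 2.0) & (slope_pct > 15.0)
```

Each operand is a full grid, so one expression answers the question for every cell at once, mirroring what the raster calculator does internally.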
• Terrain Analysis: With Spatial Analyst anyone can derive useful information such as hillshade,
contour, slope, viewshed or aspect maps. These topographic surfaces give users the power to
relate their data to real-world elevations and analyze them.
• Spatial Modeling: ArcGIS Spatial Analyst provides the ability to create more sophisticated
spatial models for many different geospatial problems. Some of the process models of Spatial
Analyst include:
Suitability Modeling: Most spatial models involve finding optimum locations, such as the
best location to build a new school, landfill, or resettlement site.
Surface Modeling: What is the ozone pollution level at various locations in a country?
ArcGIS Spatial Analyst provides a rich set of tools to perform cell-based (raster) analysis.
Toolset Description
Conditional • The conditional tools allow for control of the output values based on the
conditions placed on the input values.
Conversion • When feature data are to be converted into raster data, or if raster data
needs to be converted into another format, Conversion tools are used.
• those that change the geometry of the dataset through projections and
georeferencing (geometric transformation)
• those that change the orientation of the raster
• those that combine several adjacent rasters into a single raster
Groundwater • The groundwater tools can be used to perform basic advection-
dispersion modeling of constituents in groundwater.
Raster Creation • The Value Creation functions create new rasters in which the output
values are based on a constant or a statistical distribution.
Reclass • Reclassifying data simply means replacing input cell values with new
output cell values.
• There are many reasons to reclassify data. Some of the most common
reasons are to replace values based on new information, or to group certain
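Reclassification of this kind amounts to mapping ranges of input cell values onto new output values. A numpy sketch, with illustrative class breaks rather than values from the text:

```python
import numpy as np

# Reclassify a slope raster (percent) into three suitability classes.
slope = np.array([[2.0, 8.0],
                  [17.0, 40.0]])
breaks = [5.0, 15.0]                 # <5 -> 1, 5-15 -> 2, >15 -> 3
classes = np.digitize(slope, breaks) + 1
```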
• Patterns that were not readily clear in the original surface can be
derived, such as contours, angle of slope, steepest downslope
direction (aspect), shaded relief (hillshade), and viewshed.
Methods of interpolation:
• LINEAR INTERPOLATION
• NON LINEAR INTERPOLATION
Linear interpolation is a method of assigning values between points of known elevation spread
over an area. Consider a single line transect of data points ranging between 100 feet and 150 feet
in elevation. If we assume that the surface changes in a linear fashion, just as in a simple
arithmetic series, intermediate values can be assigned along a linear progression. The search for
neighbors can be controlled by predetermining the number of sample data points or by selecting a
certain number of points in quadrants or even octants.
Measure the distance between each pair of points from every kernel or starting point. The
elevation values at each point are then weighted by the inverse square of the distance, so that
closer values lend more weight to the calculation of the new elevation than distant ones. There
are many modifications of this approach: some reduce the number of distance calculations by
employing a "learned search" approach; others modify the distance weighting by factors other
than the square. The barrier method is especially useful in the development of surface models that
must account for local obstructing objects: the interpolation cannot pass through the barrier in its
search for neighboring weights and distances. In general, these methods capture broad trends in
the Z surface rather than exactly modeling individual undulations and minor surface changes.
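The inverse-distance-squared weighting described above can be sketched as follows (a plain numpy version, without the learned-search or barrier refinements):

```python
import numpy as np

def idw(x, y, sample_xy, sample_z, power=2):
    """Inverse-distance-weighted estimate at (x, y): nearer control
    points contribute more weight than distant ones."""
    d = np.hypot(sample_xy[:, 0] - x, sample_xy[:, 1] - y)
    if np.any(d == 0):                 # exactly on a control point
        return float(sample_z[d.argmin()])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_z) / np.sum(w))

# Two control points at elevations 100 ft and 150 ft.
xy = np.array([[0.0, 0.0], [10.0, 0.0]])
z = np.array([100.0, 150.0])
est = idw(5.0, 0.0, xy, z)   # midway, so both points weigh equally
```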
Non-linear interpolation techniques are designed to eliminate the assumption of linearity called
for in linear methods. There are three basic types of non-linear interpolation methods:
• Weighting method
• Trend surfaces
• Kriging
Weighting methods assume that the closer together sample values are, the more likely they are to
be affected by one another. For example, as we go up a hill, we note that there is much greater
similarity in the general trend of elevation values close to us than there would be if we were to
compare our local elevation to a point far away. Likewise, as we go downhill, there will be
similar changes in elevation values for neighboring points. Nearing the bottom of the hill,
however, we quickly notice that the elevation values change rather rapidly at the base of the hill,
whereas the plain beyond the hill once again takes on a certain similarity in elevational changes.
To depict the topography more accurately, we need to select points within a neighborhood that
demonstrate this surface similarity. This is done by a number of search techniques, including
defining the neighborhood by a predefined distance or radius from each point. If we are hiking up
a mountain, the topography changes in an upward direction between the starting point and the
summit: this is the drift.
But along the way, we find local dips denting the surface, accompanied by random but spatially
correlated elevations. Along the way, we also find boulders that must be stepped over, which can
be thought of as elevation noise because they are not directly related to the underlying surface
structure causing the elevational change in the first place. Elevational dependence on distance is
measured with a statistical graphing technique called the semivariogram, which plots the distance
between samples, called the lag, on the horizontal axis; the vertical axis gives the semivariance,
defined as half the variance between each elevational value and each of its neighbors. As the
distance between points increases, there is a rapid increase in the semivariance, meaning that the
spatial dependency of values drops rapidly. Eventually a critical value of lag, known as the range,
is reached, at which point the variance levels off and stays essentially flat. Kriging is an exact
method of interpolation. Interpolation is most easily performed by isolating individual points and
their associated elevational values and converting them to an altitude point matrix using a non-
linear progression that approximates curves or other forms of numerical series.
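The empirical semivariogram described above can be sketched in numpy: for every sample pair, half the squared difference is averaged within distance (lag) bins:

```python
import numpy as np

def semivariogram(xy, z, lag_width):
    """Empirical semivariance per lag bin: half the mean squared
    difference between sample pairs whose separation falls in the bin."""
    d = np.hypot(xy[:, None, 0] - xy[None, :, 0],
                 xy[:, None, 1] - xy[None, :, 1])
    sq = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)        # count each pair once
    bins = (d[iu] / lag_width).astype(int)
    lags, gammas = [], []
    for b in np.unique(bins):
        sel = bins == b
        lags.append((b + 0.5) * lag_width)   # bin-centre lag distance
        gammas.append(0.5 * sq[iu][sel].mean())
    return np.array(lags), np.array(gammas)

# Three collinear samples; semivariance grows with lag, as expected.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
z = np.array([0.0, 1.0, 2.0])
lags, gammas = semivariogram(xy, z, lag_width=1.0)
```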
In trend surfaces we use sets of points identified within a specified region. A trend surface can be
relatively flat, showing an overall trend for the entire coverage, or it can be relatively complex.
The type of equation used determines the amount of undulation in the surface. The simpler the
trend surface looks, the lower the degree it is said to have. For example, a first-degree trend
surface is a single plane that slopes across the coverage; a surface with one bend in it is said to
be a second-degree trend surface.
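A first-degree trend surface is just a least-squares plane fit. A numpy sketch, using illustrative sample points:

```python
import numpy as np

def trend_surface(x, y, z):
    """Fit a first-degree trend surface z = a + b*x + c*y by least squares."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                       # (a, b, c)

# Samples lying exactly on the plane z = 5 + 2x + 3y.
x = np.array([0.0, 1.0, 0.0, 1.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
z = 5.0 + 2.0 * x + 3.0 * y
a, b, c = trend_surface(x, y, z)
```

Higher-degree surfaces follow the same pattern, with extra columns for x², xy, y² and so on.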
Kriging, the final method of interpolation, optimizes the interpolation procedure on the basis of
the statistical nature of the surface. Kriging uses the idea of the regionalized variable, which
varies from place to place with some apparent continuity but cannot be modeled with a single
smooth mathematical equation. Kriging treats each such surface as if it were composed of three
separate components. The first, called the drift or structure of the surface, treats the surface as a
general trend in a particular direction. Second, there are small variations from this general trend,
such as small peaks and depressions in the overall surface, that are random but still related to one
another spatially. Finally, there is random noise that is associated with neither the overall trend
nor the spatially correlated variation.
Use of interpolation
Interpolation is a useful technique for creating isolines that describe the surface with which you
are working. It can also be used to display the surface as a fishnet map or a shaded relief map. A
trend surface interpolation will provide information about the thickness of an ore body as it
slopes across the subterranean surface. In addition, one may want to know about the quality of
the ore seam; here a kriging technique would prove useful, because it is the nature of ore bodies
to exist as regionalized variables.
Problems in interpolation
There are a number of methods of interpolation; while performing any of them, however, four
factors need to be considered:
1. The number of control points
2. The location of control points
3. The problem of saddle points
4. The area containing data points.
It is safe to say that the more sample points we have, the more accurate the interpolation will be.
The number of control or target points is frequently a function of the nature of the surface: the
more complex the surface, the more data points we need. For important features of particular
interest, such as depressions and stream valleys, we should also place more data points to capture
the necessary detail. The location of sample points relative to one another also has an impact on
the accuracy of interpolation. The problem of sample placement is even more severe when we
consider interpolation from data collected by area to produce an isoplethic map. When the data
points are relatively evenly distributed, it is easiest to use the centroid-of-cell method or the
center-of-gravity method to place sample points.
The saddle point problem, sometimes called the alternative choice problem, arises when both
members of one pair of diagonally opposite Z values forming the corners of a rectangle are
located below, and both members of the second pair lie above, the value sought by the
interpolation algorithm. A simple way to handle this problem is to average the interpolation
values produced from the diagonally placed control points and then place this average value at
the center of the diagonal.
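The averaging remedy for the saddle point problem can be sketched directly; the corner naming and the None return for the non-saddle case are illustrative choices, not from the text:

```python
def saddle_value(z_nw, z_ne, z_se, z_sw, target):
    """If one diagonal pair of corner values lies below the sought value
    and the other pair lies above it (a saddle), return the average of the
    two diagonal interpolations, placed at the rectangle's centre."""
    diag1 = (z_nw + z_se) / 2.0
    diag2 = (z_ne + z_sw) / 2.0
    is_saddle = (max(z_nw, z_se) < target < min(z_ne, z_sw)) or \
                (max(z_ne, z_sw) < target < min(z_nw, z_se))
    return (diag1 + diag2) / 2.0 if is_saddle else None
```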
The final problem that must be considered in interpolation is a common one in GIS operations,
involving the area within which the data points are collected. More specifically, for the
interpolation to work properly, the points whose values are to be estimated must have control
points on all sides. When we approach the map margin, the interpolation routine is faced with
control points on only two or three sides of our unknown elevation points, because the map
border precludes any data points beyond the margin. The best interpolation results are obtained
when we are able to search a neighborhood in all directions for the selection of control points and
determination of weights. Sometimes this problem occurs because surface data were not part of
the original design, sometimes because the study area was selected on the basis of the confines of
a single map, and sometimes because of time limitations.
DEM:
A digital elevation model (DEM) is a digital model of landform data represented as point
elevation values. The TIN model is the basic vector data structure for representing surfaces in the
computer. However, the TIN model is only one of a number of methods of storing Z-value
information, creating a group of products collectively called DEMs. Such methods are based
either on mathematical models or on image models designed to more closely approximate how
surfaces are normally sampled in the field or represented on paper. Although mathematical
calculations are very useful, the currently available DEMs are most often image models of some
description.
Image models of Z surfaces based on lines are nearly the graphical equivalent of the traditional
method of isarithmic mapping. In such cases models are produced by scanning or digitizing
existing contour lines or other isarithms. The purpose is to extract the form of the surface from
the lines that most commonly depict or describe that form. Once input, the data are stored either
as line entities or as polygons. Because it is not particularly efficient to calculate slopes and
aspects or to produce shaded relief outputs from such data models, it is more common to convert
them to a point model by treating each point connecting each line segment as a sample location
with an individual elevation value. The result is known as a discrete altitude matrix. The discrete
altitude matrix is a point image method that represents the surface by a number of points, each
containing a single elevation value.
TIN:
In raster, geographic space is assumed to be discrete in that each grid cell occupies a specific
area. Within that discretized or quantized space, a grid cell can have encoded as an attribute the
elevational value that is most representative of the grid cell. This might be the highest or lowest
value, or even an average elevational value for the grid cell. As such, the existing raster data
structures are quite capable of handling surface data. In vector, however, the picture is quite
different. Much of the space between the graphical entities is implied rather than explicitly
defined. To define this space explicitly as a surface, one must quantize the surface in a way that
retains major changes in surface information and implies areas of identical elevation data.
Slope:
A common way of expressing slope is rise over reach, where rise is the change in elevation and
reach is the horizontal distance. The general method of calculating slope is to compute a surface
of best fit through neighboring points and measure the change in elevation per unit distance.
Specifically, the GIS calculates the rise/reach value throughout the entire coverage, creating a
set of categories of slope amount, much as we would do when defining class limits. If we wish to
have fewer slope categories than are actually produced, we can reclassify the set generated by the
GIS. Although techniques designed to characterize different neighborhoods by the amount of slope on
a topographic surface are in common use, the surface need not be a topographic one. Our idea of a
surface can be generalized to apply to any type of surface data measurable at the ordinal, interval
or ratio levels, called a statistical surface, which is a surface representation of these spatially
distributed statistical data.
In raster, slope calculation is most often done by looking at all grid cells in the database and
examining their neighbor cells, so that slope values for the entire coverage can be computed. The
software fits a plane through the eight immediate neighbor cells, finding either the greatest slope
value for the neighborhood of grid cells or an average slope. For each group of cells, the software
uses the grid cell resolution as the measure of distance, and then compares the attribute values of
the central cell to those of the surrounding cells.
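As a sketch of this neighborhood approach (not any particular package's exact formula), the following computes an average rise/reach slope from the eight neighbors of a cell, with diagonal neighbors at a proportionally greater distance. The elevation grid is hypothetical:

```python
import math

def average_slope(grid, r, c, cell_size):
    """Average rise/reach between a cell and its eight neighbors.

    grid is a 2-D list of elevations; cell_size is the grid resolution
    (the ground distance between adjacent cell centers).
    """
    center = grid[r][c]
    slopes = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            rise = abs(grid[r + dr][c + dc] - center)
            reach = cell_size * math.hypot(dr, dc)  # diagonal neighbors are farther
            slopes.append(rise / reach)
    return sum(slopes) / len(slopes)

# A small hypothetical elevation grid with a 2 m bump at the center
elev = [[10, 10, 10],
        [10, 12, 10],
        [10, 10, 10]]
print(average_slope(elev, 1, 1, 30.0))
```

Taking max(slopes) instead of the mean would give the greatest-slope variant mentioned above.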
Aspect:
Because surfaces exhibit slopes, these features are, by definition, oriented in a particular
direction, called the aspect. The two concepts of slope and aspect are inseparable from a physical
as well as an analytical perspective: without a slope, there is no aspect. There are numerous
applications of this technique. For example, biogeographers and ecologists are aware that there is
generally a noticeable difference between the vegetation on slopes that face north and slopes that
face south. The primary reason for this difference is the availability of sunlight to green plants,
but our interest in the phenomenon is that GIS allows us to separate out north-facing versus
south-facing slopes for comparison to related coverages such as soils and vegetation.
Geologists frequently want to know the prevailing slopes of fault blocks or exposed folds as a
path to understanding the underlying subsurface processes. Or a grower may want to place an
orchard on the sunny side of a hill to take advantage of the maximum amount of sunshine. All these
determinations and many more can be performed through the use of neighborhood functions that
classify sloping surfaces based on their aspect.
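A minimal sketch of classifying north- versus south-facing cells from surface gradients. The gradient inputs are hypothetical; a real GIS derives them from the elevation grid:

```python
import math

def aspect_class(dz_north, dz_east):
    """Classify a cell as north- or south-facing from its gradients.

    dz_north / dz_east: elevation change per unit distance toward
    north and east. A surface that drops toward the north faces north;
    a cell with no gradient at all is flat and has no aspect.
    """
    if dz_north == 0 and dz_east == 0:
        return "flat (no aspect)"
    # Azimuth of steepest descent, measured clockwise from north
    azimuth = math.degrees(math.atan2(-dz_east, -dz_north)) % 360
    return "north-facing" if azimuth <= 90 or azimuth >= 270 else "south-facing"

print(aspect_class(dz_north=-0.1, dz_east=0.0))  # surface drops northward
```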
Relief:
The simplest method of visualizing surface form is to produce a cross-sectional profile of the
surface. This is common practice in many courses in map reading, geography and geology, where
students are asked to render the profile of a topographic surface along a line drawn between two
points. This is done by transferring each elevation value to a sheet of graph paper where the
horizontal axis is exactly the same width as the line between the points and the vertical axis is
scaled to some vertical exaggeration of the original surface elevation values.
Both surface form techniques, whether raster or vector, are designed to produce neighborhoods
based on changes in surface value that can be interpreted by the user to represent specific
features. Thus ridges, channels, peaks, watersheds and so on may need to be identified as specific
topographic features for later analysis.
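The profile construction described above can be sketched by sampling a grid at evenly spaced positions along the line between two points, using nearest-cell lookup. The elevation grid is hypothetical:

```python
def profile(grid, start, end, samples):
    """Sample elevations along the line between two grid positions.

    start/end are (row, col) pairs; nearest-cell lookup at evenly
    spaced positions gives the cross-sectional profile values.
    """
    (r0, c0), (r1, c1) = start, end
    values = []
    for i in range(samples):
        t = i / (samples - 1)
        values.append(grid[round(r0 + t * (r1 - r0))][round(c0 + t * (c1 - c0))])
    return values

# Hypothetical elevation grid; profile from top-left to bottom-right
elev = [[10, 12, 14, 16],
        [11, 13, 15, 17],
        [12, 14, 16, 18]]
print(profile(elev, (0, 0), (2, 3), 4))  # -> [10, 13, 15, 18]
```

Plotting these values against distance, with an exaggerated vertical scale, reproduces the graph-paper exercise.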
Hill Shading:
The process called visibility and intervisibility analysis recognizes that if you are located at a
particular point on a topographic surface, there are portions of the terrain you can see and others
you cannot. The generalized term for the process is viewshed analysis, whereby one defines the
regions that are visible from a particular point in the terrain. In vector, the simplest method is
to connect a viewing location to each possible target in the coverage.
Viewshed analysis is frequently confined to determining areas that are visible to a single viewer.
This is the visibility portion of viewshed analysis. However, there may be situations where you
wish not only to know how much one can see from a particular vantage point but also to determine
how much of the terrain is visible from another's perspective, or intervisible. In military
applications, for example, you want to know whether your location is visible from possible enemy
positions. Doing this involves the same method of ray tracing as before, but it will often have to
be performed once for each viewer location.
Raster methods of intervisibility operate in much the same way, but they are less elegant and
more computationally expensive. The process begins by defining a viewer cell as a separate
coverage against which the elevation coverage will be tested. Starting at the location of the viewer
cell, the software evaluates the elevation that corresponds to that location. Then it moves out in all
directions, one grid cell at a time, comparing the elevation values of each new grid cell it
encounters with the elevation value of the viewer grid cell.
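The cell-by-cell comparison can be sketched for a single transect of cells leading away from the viewer: a target is hidden when any intermediate cell rises above the straight sight line. The elevations below are hypothetical:

```python
def visible(elev, viewer, target, viewer_height=0.0):
    """Line-of-sight test along a single transect of cells.

    elev: elevations of cells along a row leading away from the viewer;
    viewer/target are indices into that row. The target is hidden when
    an intermediate cell rises above the viewer-to-target sight line.
    """
    z0 = elev[viewer] + viewer_height
    z1 = elev[target]
    span = target - viewer
    for i in range(viewer + 1, target):
        line_z = z0 + (z1 - z0) * (i - viewer) / span  # sight-line height here
        if elev[i] > line_z:
            return False
    return True

transect = [100, 102, 110, 101, 100]
print(visible(transect, 0, 4))  # the ridge at index 2 blocks the view
print(visible(transect, 0, 2))  # the ridge itself is visible
```

A full raster viewshed repeats this test along rays from the viewer cell to every other cell.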
Most applications of intervisibility are based solely on topographic surfaces, but in some cases the
topographic surface will have forest cover with known individual or grouped heights associated with
the trees. To perform intervisibility where the heights of these or other obstructing objects are
known, the elevation coverage values must include the obstruction heights. These can be added in
both vector and raster, usually by means of a mathematically based (addition) combination of the
two coverages.
The data GIS uses is divided into two groups: spatial data and attribute or descriptive
data. GIS is the link between these two types of data. Spatial data is the way an object exists as a
natural, physical entity. It deals with location, shape and relationship with other objects. The
attribute data is descriptive data which deals with the features that are represented in the spatial
data and is qualitative information assigned to the object. For example, it could be a descriptive
name and other non-visual information about the area of interest recorded as spatial data. Both of
these combine to create our understanding of a cartographic image and thus assist in the formation
of a variety of maps. In the database system, these become key identifiers that allow the
researcher to access specific features and link certain features to certain objects.
Processing and storing of information is another part of the mapping process. There are
two ways to process the spatial data. One is the raster method that records points, lines and areas
through a matrix. The second is the vector method which uses Cartesian coordinates to save
points that become lines, polygons, and 3-dimensional objects and volumes. Each type of model
is stored as ‘themed’ data sets. These data sets contain groups of layers that are ‘themed’ together
with specific mapping information. The goal of data sets is to allow the end-user to access
information quickly and easily.
This spatial data model is not continuous; it is divided into smaller units located in
space. The size of each cell, its coordinates, and the overall grid size (i.e. the number of rows
and columns grouped together) determine the resolution and quality of the raster model. Because the
computer system stores the entity as a mesh and not as multiple points, raster storage files are
often smaller than vector files.
The second type of spatial model is called a vector model. In this type the ‘phenomena
and features’ in the area of interest are represented through points, lines, polygons and 3-
dimensional objects. The points are graphed as Cartesian co-ordinates, (x,y) or (x,y,z). A line
is two points joined together; a polygon is a string of co-ordinates with the same starting and
ending point; 3-dimensional (3-D) objects are typically polygons joined at a variety of points with
lines. The 3-D entities include settlements, mountains, deep ravines, deep soil types, etc. In
today’s world, when a researcher is in the field creating or updating maps, the points he or she is
locating are found using GPS (Global Positioning System) coordinates. GPS uses satellite
communication to identify one’s exact location in the world.
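These point, line and polygon conventions can be sketched directly; the coordinates are purely illustrative:

```python
# Minimal vector primitives: a point is an (x, y) coordinate pair, a
# line is two joined points, and a polygon is a ring whose first and
# last coordinates are identical (illustrative sketch only).
point = (3.0, 4.0)
line = [(0.0, 0.0), (3.0, 4.0)]
polygon = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 0.0)]

def is_closed(ring):
    """A polygon ring must start and end at the same coordinate."""
    return ring[0] == ring[-1]

print(is_closed(polygon), is_closed(line))  # -> True False
```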
Of the two types of spatial models, the vector model is more precise with the details of
the entities stored within it. It is advantageous in terms of resolution and ability to store the
changes that happen over time; however, it requires a more complex and ‘robust’ computer
system in which to store the information for calculating the displays (Embley and Nagey 1991).
The point of creating the layering system is that different users have different needs and
different points of control. A planning department may need to know the ownership of a plot of land
or the location of a park within a city and its level of plantation, while the streets department
may need the center line of a road or the size of a sewerage line below the road. A database can be
created that meets the needs of both. The combination of an organized layering system coupled
with thematic data or layer sets can meet the needs of both without overlapping of information,
unless requested.
The storage of the coverages is the same as previously described. There are two types of
data that are part of a coverage: spatial data and attribute data. The discrete definitions help
the computer file and retrieve information for the user.
Coverages are determined, defined and collected by a researcher who must understand the
overall physicality of the landscape and type of computer program being used to store the data.
This person looks to previous database information, maps and historic cartographic information to
evaluate the requirements for documentation.
Conclusion
The spatial data modeling system is a powerful data base tool that is useful in a variety of
applications. Creating a logical layering system assists the user in obtaining specific data that
may be required. Cartographers, urban planners, city engineers, architects, forest officials, to
name only a few, are the professionals that can benefit from accurate recording of areas on the
globe.
The GIS database becomes more accurate as technological tools and equipment are used to
understand the attributes of phenomena and features on the earth’s surface. The use of previous
documentation allows us to understand our past and potentially predict future events and trends.
25. Spatial data- Representation of Geographic Features in
Vector and Raster Models
What is geographic data?
Geographic data are a special form of spatial data characterized by two crucial properties, as
defined in “Concepts & Techniques of GIS” by Albert K. W. Yeung:
• They pertain to phenomena on or near the Earth’s surface, i.e. they are georeferenced.
• They are normally recorded at relatively small scales, referred to as geographical scale.
Therefore, geographic data, in short, are the component of GIS that records the locations and
characteristics of the natural features or human activities that occur on or near the Earth’s
surface.
Geographic data are categorized into three distinct types:
• The geodetic control network
This provides a geographical framework whereby different sets of geographic data can be
cross-referenced with one another. It is the foundation of all geographic data.
• The topographic base
This is normally created as the result of a basic mapping programme. It is usually obtained
by using photogrammetry.
• Thematic overlays
These are thematic data pertaining to specific GIS applications. They can be derived directly
from the topographic base.
Geographic data within the digital database are represented in the following forms:
• Spatial data – This represents features that have a known location on the earth.
• Attribute data – This is information linked to the geographic features (spatial data) that
describes those features.
• Data layers – These are the results of combining spatial data and attribute data, i.e. the
addition of the database to the spatial location.
• Layer types – These refer to the way spatial and attribute information are connected.
There are two major layer types: i) vector and ii) raster.
• Topology – This refers to how geographic features are related to one another and where
they are in relation to one another.
Levels of measurement:
The level of measurement is necessary for the classification of data; measurements may be made
for point, line and polygon features.
o Nominal – This is the lowest level of measurement, in which data can only be
distinguished qualitatively, such as vegetation or soil type.
o Ordinal – Data at this level can be ranked into hierarchies, e.g. stream order or city
boundaries.
o Interval – Data at this level have a known distance between values but an arbitrary zero
point, e.g. temperature in degrees Celsius.
o Ratio – This is the highest level of measurement. It includes an absolute starting point, e.g.
property value and distance.
26. Data products: Data formats, Ground segment
organization, Data product generation, referencing scheme
GIS Data
(1) Entity (spatial data):
• A point / a line / an area
• “Where things are” data
• e.g. the Taj Mahal, a monument in Agra, has a reference in terms of latitude and longitude.
• A special database structure is required to store the data.
• Spatial entity types have the basic topological properties of location, dimension and
shape.
DBMS :-
A DBMS is a computer program to control the storage, retrieval and modification of data in a
database.
Functions of DBMS:
• File handling and management (creating, modifying or deleting database structures)
• Adding, updating and deleting records
• Extraction of information from data
• Maintenance of data security and integrity
• Application building
Functions of DBMS :-
1. Security
2. Integrity
3. Synchronization
4. Physical data independence
5. Minimisation of redundancy
6. Efficiency
Components of DBMS :-
Interaction with a database system involves performing the following broad types of tasks:
• Data definition
• Storage definition
• Database administration
• Data manipulation
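The DBMS functions listed above, creating a structure, adding, updating and deleting records, and extracting information, can be illustrated with Python's built-in sqlite3 module. The parcels table and its records are hypothetical:

```python
import sqlite3

# An in-memory database: create a structure, then add, update, delete,
# and extract records (the table and its contents are hypothetical).
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, owner TEXT)")
cur.execute("INSERT INTO parcels VALUES (1, 'Sharma')")         # add records
cur.execute("INSERT INTO parcels VALUES (2, 'Patel')")
cur.execute("UPDATE parcels SET owner = 'Mehta' WHERE id = 2")  # update
cur.execute("DELETE FROM parcels WHERE id = 1")                 # delete
rows = cur.execute("SELECT id, owner FROM parcels").fetchall()  # extract
print(rows)  # -> [(2, 'Mehta')]
con.close()
```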
Data in GIS:
Digital image data: original 320 rows * 480 columns.
Enlargements show: (1) 20 rows * 30 columns of pixels;
(2) 10 rows * 15 columns of pixels.
Reference Data:-
( Ground Truth ) Used to serve the following purpose
1. To aid in the analysis and interpretation of remotely sensed data
2. To calibrate a sensor
3. To verify information extracted from remote sensing data
Digital remote sensing images are collected in raster format. Accordingly, digital images
are inherently compatible spatially with other sources of information in a raster domain, and
‘raw’ images can be included directly as layers in a raster-based GIS.
Overlay of images between the raster and vector data formats can be done: raster images can be
displayed as a backdrop for a vector overlay.
GIS supports conversion between raster and vector formats as well as the simultaneous
integration of raster and vector data.
27. Spatial data-Concept of Arcs, Nodes, Vertices &
Topology
Data Models available for Spatial Data Modeling in GIS:
• Computer-Aided Design (CAD)
• Graphical
• Image
• Raster
• Vector
• Network
• Triangulated Irregular Network (TIN)
• Object-Oriented
[Figure: A schematic of data models, relating vector models (coverage, TIN, regions, dynamic
segmentation, object-oriented) and raster models (grid) to attribute tables (Access, dBase);
after Chang, 2002.]
2. Data Models
2.1. Raster
• Represents the earth’s surface and objects on it with uniformly shaped cells or pixels of
the same size.
• Divides space into a two-dimensional array (length and breadth).
• Space-filling approach: each cell has a value.
• Cells are typically square, but not necessarily.
• Uses a common ground dimension for cells.
• Must have a projection (otherwise gaps and overlaps occur).
• Topology is implicit, by virtue of the cell layout and the variability between cells.
• Location within a layer is defined by row and column, starting in the upper-left corner
with 0,0, NOT 1,1.
• Georeferenced in real-world coordinates (usually in a header file).
• The header file also holds the number of rows and columns, the cell size, and other metadata.
• Ground distance and area are calculated from the cell size.
• Attributes are represented by the value within a cell, one value per cell; several
attributes can be tied to a cell with a value attribute table.
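The header conventions above can be sketched as follows. The origin, cell size and grid dimensions are hypothetical; rows and columns count from 0,0 at the upper-left corner, so y decreases as the row number grows:

```python
# Sketch of locating raster cells in ground coordinates from header
# metadata (all values below are hypothetical).
header = {"x_origin": 500000.0, "y_origin": 4200000.0,  # upper-left corner
          "cell_size": 30.0, "rows": 100, "cols": 100}

def cell_center(row, col, h):
    """Ground coordinates of the center of a cell."""
    x = h["x_origin"] + (col + 0.5) * h["cell_size"]
    y = h["y_origin"] - (row + 0.5) * h["cell_size"]
    return x, y

print(cell_center(0, 0, header))  # center of the upper-left cell (0,0)
```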
3.2.2. Lines
[Figure: An arc-node diagram on X-Y axes (0-7), showing numbered arcs composed of nodes (N)
and vertices (V), bounding polygons labelled A, B, C and D.]
Representation of Topological Structure

Arc #   From Node   To Node   Left Poly   Right Poly
1       N1          N2        B           A
2       N2          N3        B           A
3       N3          N4        B           D
4       N1          N4        D           B
5       N5          N5        B           C
6       N1          N3        A           D

List of Nodes and Vertices

Arc #   Nodes and Vertices
1       N1@4,6  V1@2,6  N2@1,3
2       N2@1,3  V2@3,2  N3@5,3
3       N3@5,3  N4@5,5
4       N1@4,6  N4@5,5
5       N5@2,4  N5@2,4
6       N1@4,6  V6@6,7  V5@7,6  V4@6,5  V3@6,4  N3@5,3
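Held in a simple data structure, the arc table above can answer adjacency questions without any geometry; a sketch:

```python
# The arc table above as a dictionary keyed by arc number; each entry
# holds (from node, to node, left polygon, right polygon).
arcs = {
    1: ("N1", "N2", "B", "A"),
    2: ("N2", "N3", "B", "A"),
    3: ("N3", "N4", "B", "D"),
    4: ("N1", "N4", "D", "B"),
    5: ("N5", "N5", "B", "C"),
    6: ("N1", "N3", "A", "D"),
}

def adjacent(poly_a, poly_b):
    """Two polygons are adjacent if some arc has one on each side."""
    return any({left, right} == {poly_a, poly_b}
               for _, _, left, right in arcs.values())

print(adjacent("A", "B"), adjacent("A", "C"))  # -> True False
```

This is the sense in which the topological structure is "intelligent": spatial relationships are read directly from the table rather than computed from coordinates.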
4. Topology
The topologic data structure is often referred to as an intelligent data structure because
spatial relationships between geographic features are easily derived when using them.
Primarily for this reason the topologic model is the dominant vector data structure
currently used in GIS technology. Many of the complex data analysis functions cannot
effectively be undertaken without a topologic vector data structure.
The secondary vector data structure that is common among GIS software is the computer-
aided drafting (CAD) data structure. This structure consists of lists of elements, not features:
geographic features such as points, lines, or areas are defined simply by strings of vertices.
There is considerable redundancy with this data model, since the boundary segment
between two polygons can be stored twice, once for each feature. The CAD structure
between two polygons can be stored twice, once for each feature. The CAD structure
emerged from the development of computer graphics systems without specific
considerations of processing geographic features. Accordingly, since features, e.g.
polygons, are self-contained and independent, questions about the adjacency of features
can be difficult to answer. The CAD vector model lacks the definition of spatial
relationships between features that is defined by the topologic data model.
This means that there are two potential methods used when working with features—one in
which features are defined by their coordinates and another in which features are
represented as an ordered graph of their topological elements.
3.8.1.1. Advantages:
• It used a simple structure to maintain topology.
• It enabled edges to be digitized and stored only once and shared by many features.
• It could represent polygons of enormous size (with thousands of coordinates)
because polygons were really defined as an ordered set of edges (or arcs).
• The topology storage structure of the coverage was intuitive. Its physical
topological files were readily understood by ArcInfo users.
3.8.1.2. Disadvantages:
• Some operations were slow because many features had to be assembled on the fly
when they needed to be used. This included all polygons and multipart features
such as regions and routes.
• Topological features (such as polygons, regions, and multipart lines called
"routes") were not ready to use until the coverage topology was built. If edges were
edited, the topology had to be rebuilt. (Note: Partial processing was eventually
used, which required rebuilding only the changed portions of the coverage
topology.) In general, when edits are made to features in a topological dataset, a
geometric analysis algorithm must be executed to rebuild the topological
relationships regardless of the storage model.
• Coverages were limited to single-user editing. Because of the need to ensure that
the topological graph was synchronized with the feature geometries, only a single
user at a time could update a topology. Users would tile their coverages and
maintain a tiled database for editing. This enabled individual users to "lock down"
and edit one tile at a time. For general data use and deployment, users would
append copies of their tiles to a mosaicked data layer.
28. Spatial data-computer representation for storing
spatial data.
Spatial data:-
Databases
• A database is like a storehouse capable of storing large amounts of data. It comes
with a number of useful functions:
• It can be used by several users at the same time – i.e., it allows concurrent use.
• It offers a number of techniques for storing data and allows the most efficient one to be
used – i.e., it supports storage optimization.
• It allows rules to be enforced on the stored data, which are automatically checked after
each update to the data – i.e., it supports data integrity.
• It offers an easy-to-use manipulation language, which allows all sorts of data extraction
and data updates to be performed – i.e., it has a query facility.
• It will try to execute each query in the data manipulation language in the most efficient
way – i.e., it offers query optimization.
Spatial databases
Spatial databases are a specific type of database. They store representations of geographic
phenomena in the real world to be used in a geographic information system. Spatial data are
different in the sense that they use methods other than tables to store representations, because it
is not easy to store and represent geographic information using tables alone. A spatial database is
not the same as a GIS, although the two have some common characteristics: the spatial database
concentrates on the storage functions mentioned above, while a GIS concentrates on operations on
the spatial data, which require a better understanding of geographic space. The spatial data to be
stored can consist of points, lines, areas or images, and different storage and compression
techniques exist for each of them.
Computer Representation of spatial Data
A computer must be instructed exactly how spatial patterns should be handled and displayed.
There are two formats:
Vector
Grid cell or raster
Vector
With the vector format a set of lines, defined by start and end points as well as some form of
connectivity, completely represent an object.
Raster
With the raster format a set of points on a grid clearly represent an object and the computer
assigns a common code (symbol or color) to each cell.
Both formats have certain advantages and disadvantages. There is no unique connection between the
vector and raster structures of a geographic database, and in GIS a combination of both formats is
often used.
Raster data structure
The raster data structure consists of an array of grid cells or pixels, referenced by a row and
column number and containing a number representing the type or value of the parameter being
mapped. The 2-dimensional surface via which the geographical data are linked is not continuous,
and this can have an important effect on estimates of lengths and areas when grid cell sizes are
large with respect to the features being represented. In the raster format, a range of different
methods is used to encode the spatial data for storage and representation. There are four methods
by which compact storage can be achieved:
Chain codes
Run-length codes
Block codes
Quadtrees
Ground truth is the actual facts of a situation, without errors introduced by sensors or human
perception and judgment. For example, the actual location, orientation, and engine and gun state of
an M1A1 tank in a live simulation at a certain point in time is the ground truth that could be used
to check the same quantities in a corresponding virtual simulation.
It is data collected on the ground to verify mapping from remote sensing data such as air photos or
satellite imagery, and it serves to verify the correctness of remote sensing information by use of
ancillary information such as field studies.
In cartography and in the analysis of aerial photographs and satellite imagery, the ground truth is
the set of facts found when a location is field checked, that is, when people actually visit the
location on foot.
Chain codes
A chain code, also known as a boundary or border code, is used in cartographic applications since
it works by defining the boundary of the data. The chain code of a region is specified with
reference to a starting point and a sequence of unit vectors, arranged so that the interior of the
region remains to the right of the vectors. The directions can be represented by numbers, and chain
codes with more than four directions can also be used. Chain codes are not only compact, they can
also simplify the detection of features of a region boundary; on the other hand, they do not lend
themselves to shape properties such as elongatedness or to set operations such as union and
intersection.
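A four-direction chain code can be sketched as follows. The region, start point, and direction numbering (0 = east, 1 = south, 2 = west, 3 = north) are illustrative choices; walking the boundary clockwise keeps the interior to the right of travel:

```python
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}  # E, S, W, N unit vectors

def trace(start, code):
    """Recover the boundary vertices from a start point and a chain code."""
    x, y = start
    points = [(x, y)]
    for step in code:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# A 2x2 square region walked clockwise from its top-left corner:
# two moves east, two south, two west, two north (hypothetical region).
boundary = trace((0, 2), [0, 0, 1, 1, 2, 2, 3, 3])
print(boundary[0] == boundary[-1])  # a closed boundary returns to its start
```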
[Figure: Ground truth of a satellite image being checked by a person measuring on the ground.]
Note: the grid cell or pixel size greatly affects the amount of detail that is preserved in
converting from vector to raster format. Area and perimeter calculations will also be altered.
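Run-length coding, the second method in the list of four, stores each raster row as (value, run-length) pairs instead of cell by cell; a minimal sketch with a hypothetical row of land-cover codes:

```python
def run_length_encode(row):
    """Compress one raster row into (value, run-length) pairs."""
    runs = []
    for cell in row:
        if runs and runs[-1][0] == cell:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([cell, 1])    # start a new run
    return [tuple(r) for r in runs]

# A row of hypothetical land-cover codes: 8 cells stored as 3 runs
print(run_length_encode([7, 7, 7, 7, 2, 2, 5, 5]))
# -> [(7, 4), (2, 2), (5, 2)]
```

The compression pays off when rows contain long runs of identical values, which is common in classified rasters.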
Block codes
This method extends the run-length encoding method to two dimensions by using a sequence of square
blocks to store data. The data structure consists of the origin (center or bottom left) and the
side length of each square. This method is also called the medial axis transformation (MAT).
Quadtree
One of the benefits of the raster model is that each cell can be subdivided into smaller cells of
similar shape and orientation. This unique feature of the raster model has led to the development
of several innovative data storage and representation methods based on regularly subdividing
space. The quadtree is a commonly used technique based on recursive decomposition of space; its
development has been motivated to a large extent by a desire to save storage by aggregating data
having similar or identical values. The saving in storage that arises from this aggregation is of
great importance. The lowest limit of division is the single pixel. This leads to a tree structure
of degree 4, because each node has 4 branches, namely the NW, NE, SW, and SE quadrants.
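The recursive decomposition can be sketched as follows: homogeneous quadrants become leaves holding a single value, while mixed quadrants split into NW, NE, SW and SE children, down to single pixels. The 4x4 raster is hypothetical:

```python
def quadtree(grid, r0, c0, size):
    """Recursively decompose a square 2^n grid into a degree-4 tree.

    Uniform blocks become leaves holding one value; mixed blocks split
    into NW, NE, SW and SE quadrants, down to single pixels.
    """
    cells = [grid[r][c] for r in range(r0, r0 + size)
                        for c in range(c0, c0 + size)]
    if len(set(cells)) == 1:
        return cells[0]                       # homogeneous: a single leaf
    h = size // 2
    return {"NW": quadtree(grid, r0, c0, h),
            "NE": quadtree(grid, r0, c0 + h, h),
            "SW": quadtree(grid, r0 + h, c0, h),
            "SE": quadtree(grid, r0 + h, c0 + h, h)}

raster = [[1, 1, 0, 0],
          [1, 1, 0, 0],
          [1, 1, 1, 0],
          [1, 1, 0, 1]]
print(quadtree(raster, 0, 0, 4))
```

Three of the four quadrants collapse to single leaves; only the mixed south-east quadrant splits further, which is exactly where the storage saving comes from.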
29. Non-spatial data - RDBMS, concepts, components, Database
scheme, Relationship - one-to-one, one-to-many
Definition:
Non spatial information about a geographic feature in a GIS, usually stored in a table and linked
to the feature by a unique identifier. For example, attributes of a river might include its name,
length, and sediment load at a gauging station.
In raster datasets, information associated with each unique value of a raster cell.
Information that specifies how features are displayed and labeled on a map; for example, the
graphic attributes of a river might include line thickness, line length, color, and font for labeling.
In MOLE, aspatial information about a geographic feature in a GIS, usually stored in a table and
linked to the feature by a unique identifier. For example, attributes of a force element might
include its name and speed. Most MOLE attributes are what some military specifications refer to
as labels or modifiers.
1. An RDBMS is a type of database management system (DBMS) that stores data in the form of
related tables. Relational databases are powerful because they require few assumptions about
how data is related or how it will be extracted from the database. As a result, the same
database can be viewed in many different ways.
2. An important feature of relational systems is that a single database can be spread across
several tables. This differs from flat-file databases, in which each database is self-
contained in a single table. Almost all full-scale database systems are RDBMSs. Small
database systems, however, use other designs that provide less flexibility in posing
queries.
Concept:-
Two important pieces of RDBMS architecture are the kernel, which is the software, and the data
dictionary, which consists of the system-level data structures used by the kernel to manage the
database.
You might think of an RDBMS as an operating system (or set of subsystems) designed
specifically for controlling data access; its primary functions are to:
• Store, retrieve, and secure data. An RDBMS maintains its own list of authorized users and
their associated privileges.
• Manage memory caches and paging.
• Control locking for concurrent resource usage.
• Dispatch and schedule user requests.
• Manage space usage within its table-space structures.
Components:-
1. The Database Server:-
This takes the SQL, decides how to execute it, with a sub-Component called the Query
Optimizer, and produces a Query Execution Plan.
It is possible to have many Database Server processes running
simultaneously, with each one tailored to a particular kind of "SQL Query".
2. An Archive Process:-
This writes completed Transactions onto a Journal or History File and deletes them from the
Log File. This is done to avoid the Log File filling up, because then everything fails and
the Servers have to be brought down to recover.
This is an embarrassing process. The worst part is that as the DBA, you often do not know
that the Archive process is not running until the Log File fills up, no more transactions can
start, every program hangs and the phone rings off the hook.
3. A Recovery Process:-
The Recovery Process handles the situations where there is a Database crash; it recovers
to the last known point at which the Database was running OK and had 'integrity'.
In other words, all the data representing a consistent set of related records had been written to
the Database at the end of a committed Transaction, with no open Transactions.
Database scheme:-
To define the database schema used by the RDBMS security realm:
• In the left pane, expand Compatibility Security > Realms and click the name of the
RDBMS security realm.
• Under Configuration > Schema for the RDBMS security realm, define the schema used
to store Users, Groups, and ACLs in the database in the Schema Properties box. The
following code example contains the database statements entered in the Schema
properties for the RDBMS code example shipped with WebLogic Server in the
/samples/examples/security/rdbmsrealm directory.
Enter, or select from the combo box drop-down list, the Java package name that the generated
classes will belong to, or leave blank for no package. If necessary, enter, or select from the combo
boxes, the catalog and schema where the tables are located. You may select a predefined search
pattern from the Catalog, Schema, and Table pattern combo boxes, or enter your own search
pattern. A table search pattern allows you to filter the tables displayed based on the table names.
The Table type option allows you to specify whether to display only tables, only views, both tables
and views, or all table-like objects. The Available list automatically displays the names of all
tables found that match the search criteria.
Catalogs and schemas refer to the organization of data in relational databases, where data is
contained in tables, tables are grouped into schemas, and schemas are grouped into catalogs. The
terms catalogs and schemas are defined in the SQL 92 standard but are not applicable to all
databases. (It is important to note that term schema as used in this section does not refer to the
same 'schema objects' that the mapping tool manipulates.) For example, in desktop databases
such as MS Access there are no such concepts. Also, many databases use slightly different
variations of these terms. For example, in SQL Server and Sybase, tables are grouped by owner,
and catalogs are databases. In this case a list of database names is shown in the catalogs field, and
a list of table owners in the schemas field. It is also very common that the owner of all tables is
the database administrator, so if you do not know the actual owner name, select 'dbo' (under SQL
Server or Sybase), or the actual name of the database administrator.
The following are predefined search patterns that can be selected from the Catalog, Schema, and
Table pattern combo boxes drop down lists:
• [N/A]: Not Applicable. This is the default entry. It means to drop the item from the search
criteria when getting a list of tables. This is usually the best setting for databases for
which the concept of a catalog and/or schema does not apply, such as MS Access.
• [All Catalogs/Schemas/Tables]: Searches for all tables under all catalogs and/or
schemas.
• [No Catalog/Schema]: Searches for all tables that do not belong to a catalog and/or
schema.
• [Current Catalog]: Searches for all tables in the catalog corresponding to the current
connection. This is usually the best setting for databases for which a catalog is
synonymous with a database, such as SQL Server. This entry is only available in the
Catalog combo box.
Relationships:-
You have a 1-to-1 relationship when an object of a class has exactly one associated object of
another class. It can also exist between two objects of the same class. You can create the
relationship in two ways, depending on whether the two classes know about each other
(bidirectional) or only one of them knows about the other (unidirectional). The possible
relationships are described below:
• Unidirectional (where only 1 object is aware of the other)
• Bidirectional (where both objects are aware of each other)
• Unidirectional "Compound Identity" (object as part of PK in other object)
Unidirectional:- For this case you could have 2 classes, User and Account, as below, so that the
Account class knows about the User class but not vice versa. Defining the Meta-Data for these
classes will create 2 tables in the database, one for User (with name USER), and one for Account
(with name ACCOUNT and a column USER_ID), as follows:-
Things to note :-
• Account has the object reference (and so owns the relation) to User and so its table holds
the foreign-key
• If you call PM.deletePersistent() on the User end of a 1-1 unidirectional relation while an
Account still references it, an exception will typically be thrown (assuming the RDBMS
supports foreign keys). To delete this record you should remove the other object's
association first.
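The delete behaviour described above can be sketched with plain SQL. This is a minimal, hypothetical schema (the table and column names are assumptions for illustration, not the ones a mapping tool would necessarily generate), demonstrated here with Python's sqlite3:

```python
import sqlite3

# Hypothetical schema for the unidirectional User/Account example.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE account (
    id INTEGER PRIMARY KEY,
    login TEXT,
    user_id INTEGER REFERENCES user(id))""")  # Account owns the relation

conn.execute("INSERT INTO user VALUES (1, 'alice')")
conn.execute("INSERT INTO account VALUES (1, 'alice01', 1)")

# Deleting the User while an Account still references it fails, mirroring
# the deletePersistent() behaviour described above.
try:
    conn.execute("DELETE FROM user WHERE id = 1")
except sqlite3.IntegrityError:
    print("FK violation: remove the Account's reference first")

# Clearing the reference first makes the delete succeed.
conn.execute("UPDATE account SET user_id = NULL WHERE id = 1")
conn.execute("DELETE FROM user WHERE id = 1")
```

The same principle applies regardless of the RDBMS, as long as foreign keys are enforced.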
Bidirectional:-
For this case you could have 2 classes, User and Account again, but this time as below. Here the
Account class knows about the User class, and also vice-versa.
Here we create the 1-1 relationship with a single foreign-key. To do this you define the MetaData.
The difference is that we added mapped-by to the field of User. This will create 2 tables in the
database, one for User (with name USER), and one for Account (with name ACCOUNT including
a USER_ID). The fact that we specified the mapped-by on the User class means that the foreign-
key is created in the ACCOUNT table.
Things to note :-
• The "mapped-by" is specified on User (the non-owning side) and so the foreign-key is
held by the table of Account (the owner of the relation)
• When forming the relation please make sure that you set the relation at BOTH sides
since JPOX would have no way of knowing which end is correct if you only set one end.
The key to transforming XML into a RDBMS is analyzing the relationships in an XML document
and then mapping those relationships into a RDBMS.
Let's examine the kinds of relationships utilized by a RDBMS - there are three:
1. 1 to 1 relationship (1:1)
We are only interested in the simplest case - the primary entity must participate in the relationship
but the secondary entity need not. e.g. I own one car, but my car does not own me.
This relationship is modeled by storing the secondary entity's primary key as a foreign key in the
primary entity's table.
2. 1 to N relationship (1:N)
There is only one case for our purposes - the primary entity may possess multiple secondary
entities.
e.g. I own zero or more books.
This relationship is modeled by storing the primary entity's (the '1') primary key as a foreign key
in the secondary entity's (the 'N') table.
3. N to N relationship (N:N)
For the purposes of transforming XML we do not need these.
e.g. the relationship between students and classes - each student can have multiple classes and
each class can have multiple students.
This relationship is modeled by creating a new table whose rows hold the primary key from
each foreign table.
30. Non-spatial data: SQL, query, processing, operations
Definition:
Geography:
The first type, “geography”, stores points, lines, polygons, and collections of these in
latitude/longitude coordinates using a round-Earth model. Most commonly available data is
given in latitude/longitude coordinates; in GIS this is referred to as spatial data.
Spatial data :
The basic spatial entities are points, lines and areas, which can be represented using two different
approaches: raster and vector.
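On the round-Earth model used by the geography type, distance between two latitude/longitude points is measured along the sphere rather than on a plane. A common sketch of this is the haversine formula (the Earth radius value here is an assumed mean radius):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/long points on a round Earth."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km on this model.
print(round(haversine_km(0.0, 0.0, 1.0, 0.0), 1))
```

Planar (projected) coordinates, by contrast, would use ordinary Euclidean distance.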
What is a Query?
A query is a command you give your database program that tells it to produce certain specified
information from the tables in the database.
Queries:
The most common operation in SQL databases is the query, which is performed with the
declarative SELECT keyword. SELECT retrieves data from a specified table, or multiple related
tables, in a database. While often grouped with Data Manipulation Language (DML) statements,
the standard SELECT query is considered separate from SQL DML, as it has no persistent effects
on the data stored in a database. Note that there are some platform-specific variations of SELECT
that can persist their effects in a database, such as Microsoft SQL Server's proprietary SELECT
INTO syntax.[11]
SQL queries allow the user to specify a description of the desired result set, but it is left to the
devices of the database management system (DBMS) to plan, optimize, and perform the physical
operations necessary to produce that result set in as efficient a manner as possible. A SQL query
includes a list of columns to be included in the final result immediately following the SELECT
keyword. An asterisk ("*") can also be used as a "wildcard" indicator to specify that all available
columns of a table (or multiple tables) are to be returned. SELECT is the most complex statement
in SQL, with several optional keywords and clauses, including:
• The FROM clause which indicates the source table or tables from which the data is to be
retrieved. The FROM clause can include optional JOIN clauses to join related tables to one
another based on user-specified criteria.
• The WHERE clause includes a comparison predicate, which is used to restrict the number
of rows returned by the query. The WHERE clause is applied before the GROUP BY
clause. The WHERE clause eliminates all rows from the result set where the comparison
predicate does not evaluate to True.
• The GROUP BY clause is used to combine, or group, rows with related values into
elements of a smaller set of rows. GROUP BY is often used in conjunction with SQL
aggregate functions or to eliminate duplicate rows from a result set.
• The HAVING clause includes a comparison predicate used to eliminate rows after the
GROUP BY clause is applied to the result set. Because it acts on the results of the GROUP
BY clause, aggregate functions can be used in the HAVING clause predicate.
• The ORDER BY clause is used to identify which columns are used to sort the resulting
data, and in which order they should be sorted (options are ascending or descending). The
order of rows returned by a SQL query is never guaranteed unless an ORDER BY clause is
specified.
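The clauses listed above can be seen working together in a single query. The table and data below are invented for illustration, run through Python's sqlite3:

```python
import sqlite3

# A query touching each SELECT clause described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcel (district TEXT, area REAL)")
conn.executemany("INSERT INTO parcel VALUES (?, ?)",
                 [("north", 120.0), ("north", 80.0),
                  ("south", 60.0), ("south", 40.0), ("east", 30.0)])

rows = conn.execute("""
    SELECT district, SUM(area) AS total_area     -- column list + aggregate
    FROM parcel                                  -- source table
    WHERE area > 35                              -- row filter, applied first
    GROUP BY district                            -- group related rows
    HAVING SUM(area) >= 100                      -- filter the groups
    ORDER BY total_area DESC                     -- sort the result
""").fetchall()
print(rows)  # [('north', 200.0), ('south', 100.0)]
```

Note how WHERE removes the 30.0 row before grouping, while HAVING acts on the grouped sums afterwards.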
Some typical Operations that you can perform are: add records from one table to another table,
import or export spreadsheets or text files, post values from one table to another, and update the
field values in all, or a subset of records; just to name a few.
In SQL, you can change data at any time by selecting a record and entering new values. This
method works well when you are editing a few records, but can become very time consuming
when you are working with hundreds or thousands of records. To handle larger data manipulation
tasks, SQL provides set-based Operations that act on many records at once.
The following terms are used below:
• Operation: A process in which SQL manipulates data. Data might come from one or more
tables.
• Transaction Table: A table used in an Operation which generally is not changed by the
Operation.
• Result Table: Contains the output of an Operation, depending on the Operation type.
SQL has a variety of Operation types that let you transform data. The following list describes them:
• Mark, Unmark, and Delete Duplicate records: Marks, unmarks, or deletes duplicate
records in the master table.
• Export and Import: Sends to and receives records from common file formats, such as
ASCII text and those used by Microsoft Excel and Lotus 1-2-3.
• Post data: Adds, subtracts, or replaces values in the master table with values from
matching records in the transaction table.
• Query records: Selects and sorts specific records in a table, and saves the query for
future use.
• Update records: Changes values in the master table using criteria you specify.
• Convert case of fields: Changes text to uppercase, lowercase, or mixed case in one or
more fields.
• Search and replace text: Searches in one or more fields for a value, and replaces it with
another value.
• Copy records: Copies selected records from a table or a set to a new table, the result
table. You can use copy with a set to copy values from multiple tables to a single table.
• Cross tab: Creates a result table whose field names correspond to field values in the
master table. The field data are cross-tabulated summary values.
• Intersect records: Creates a result table with records that are common to both the master
and transaction tables.
• Join tables: Creates a result table containing fields from both the master and transaction
tables.
• Subtract records: Creates a result table by subtracting records in one table from another
table.
• Summarize records: Creates a result table that summarizes records in the input master
table.
SQL can thus help you perform complex Operations, such as Update Operations, across entire
tables or subsets of records.
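A couple of the Operations above map directly onto standard SQL set operators; for example, Intersect records and Subtract records (the table names here are illustrative):

```python
import sqlite3

# Intersect and Subtract record Operations expressed as plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (road TEXT)")
conn.execute("CREATE TABLE trans  (road TEXT)")
conn.executemany("INSERT INTO master VALUES (?)", [("NH1",), ("NH2",), ("NH3",)])
conn.executemany("INSERT INTO trans  VALUES (?)", [("NH2",), ("NH3",), ("NH4",)])

# Intersect records: rows common to the master and transaction tables
common = conn.execute(
    "SELECT road FROM master INTERSECT SELECT road FROM trans "
    "ORDER BY road").fetchall()
print(common)  # [('NH2',), ('NH3',)]

# Subtract records: rows in the master table but not in the transaction table
only_master = conn.execute(
    "SELECT road FROM master EXCEPT SELECT road FROM trans "
    "ORDER BY road").fetchall()
print(only_master)  # [('NH1',)]
```

Operations such as Join tables and Summarize records correspond similarly to JOIN and GROUP BY queries.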
Conclusion:
Thus, whereas spatial data deals with space, geography, and the location of a particular thing,
non-spatial data deals with attributes such as numbers and time, which can be handled by running
a query in an RDBMS or SQL Server as described above.
31. Spatial Data Input: Digitization, Error Identification, Types and Sources of Error, Correction,
Editing, Topology Building.
Definition:
Data input is the process of encoding data into computer-readable format and assigning the
spatial data to a Geographic Information System (GIS).
Spatial data:
The transformation from the spherical geographic grid to a plane coordinate system is called
map projection. Hundreds of map projections have been developed for map making. Every map
projection preserves certain spatial properties while sacrificing others. Spatial features
may be discrete or continuous. Discrete features are those that do not exist between observations;
they form separate entities and are individually well distinguishable, for example wells and roads.
Continuous features exist spatially between observations; precipitation and elevation are examples
of continuous features. GIS uses two basic data models to represent spatial features:
• vector
• raster.
The vector data model uses points and their x-, y-coordinates to construct spatial features of
points, lines and areas. The raster data model uses a grid to represent the spatial variation of a
feature. Each cell in the grid has a value that corresponds to the characteristic of the spatial feature
of that location.
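The difference between the two models can be sketched for a single line feature. The grid size and coordinates below are invented for illustration:

```python
# Vector stores the x-, y-coordinates; raster marks the grid cells occupied.
vector_line = [(0, 0), (1, 1), (2, 2), (3, 3)]   # list of vertices

ncols, nrows, cell = 4, 4, 1.0                   # a 4x4 grid, cell size 1
raster = [[0] * ncols for _ in range(nrows)]
for x, y in vector_line:                          # burn each vertex into a cell
    raster[int(y // cell)][int(x // cell)] = 1

for row in reversed(raster):                      # print with north up
    print(row)
```

Note the loss of precision in the raster form: every point inside a cell collapses to the same cell value, which is why cell size governs raster accuracy.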
Digitization:
Digitizing Vector
Although the vector data structure is the choice as the primary form for handling graphical data in
most GIS and CAD packages, vector data acquisition is often more difficult than raster image
acquisition because of its abstract data structure, the topology between objects, and the associated
attributes.
In the following, we explain the commonly used methods for getting vector data, their advantages
and drawbacks.
Manual digitizing
Manual digitizing using a digitizing tablet has been widely used. With this method, the operator
manually traces all the lines from a hardcopy map using a pointer device and creates an identical
digital map on the computer. A line is digitized by collecting a series of points along the line.
Although this method is straightforward, it requires an experienced operator and is very time
consuming. For a complex contour map, it can take a person 10 to 20 days to get the map fully
digitized.
Another major drawback of this method is its low accuracy. The accuracy of manual digitizing
depends largely on how accurately the hardcopy map is duplicated on a computer by hand. The
spatial accuracy the human hand can resolve is about 40 DPI (dots per inch) in the best case, and
it drops as the operator becomes tired and bored after working for a period of time. In one
experiment at a university, a group of geography students were asked to digitize the same map,
and the final digitized maps were overlaid on top of each other to create a new map. The result
was not surprising: the new map was heavily distorted compared to the original.
Manual digitizing is supported by most GIS packages with a direct link to a digitizing tablet
through a computer I/O port.
Heads-up digitizing is similar to manual digitizing in the way the lines have to be traced by hand,
but it works directly on the computer screen using the scanned raster image as backdrop. While
lines are still manually traced, the accuracy level is higher than using digitizing tablet because the
raster images are scanned at high resolution, normally from 200 DPI to 1600 DPI. With the help
of the display tools, such as zoom in and out, the operator can actually work with the resolution of
the raster data therefore digitize at a higher accuracy level. However, the accuracy level is still not
guaranteed because it is highly dependent on the operator and how he digitizes. This method is
also time-consuming and takes about the same amount of time as the manual digitizing method.
The interactive tracing method automates individual line tracing process by tracing one line at a
time under the guidance of the operator. This is a significant improvement over manual heads-up
digitizing in terms of digitizing accuracy and speed, especially when fully automatic raster to
vector conversion cannot be applied, in cases such as low image quality and complex layers. The
main advantage of using interactive tracing is the flexibility of tracing lines selectively and better
operator control.
Automatic Digitizing:
Two digitizing methods are considered here: scanning and automatic line following. Scanning is
the most commonly used method of automatic digitizing. Scanning is an appropriate method of
data encoding when raster data are required, since this is the automatic output format from most
scanning software. Thus, scanning may be used to input a complete topographic map that will be
used as a background image for vector data such as pipelines or cables. In this case a raster background map
is extremely useful as a contextual basis for the data of real interest. Another type of automatic
digitizer is the automatic line follower. This encoding method might be appropriate where digital
versions of clear, distinctive lines on a map are required (such as country boundaries on a world
map, or clearly distinguished railways on a topographic map). The method mimics manual
digitizing and uses a laser- and light-sensitive device to follow the lines on the map. Whereas
scanners are raster devices, the automatic line follower is a vector device and produces output as
(x,y) coordinate strings.
TOPOLOGY
Topology is implemented as a set of integrity rules that define the behavior of spatially related
geographic features and feature classes. Topology rules, when applied to geographic features or
feature classes in a geodatabase, enable GIS users to model such spatial relationships as
connectivity (are all of my road lines connected?) and adjacency (are there gaps between my
parcel polygons?). Topology is also used to manage the integrity of coincident geometry between
different feature classes (e.g., are the coastlines and country boundaries coincident?).
Topology applies GIS behaviors to spatial data. Topology enables GIS software to answer
questions such as adjacency, connectivity, proximity, and coincidence. In ArcGIS, a topology
provides a powerful and flexible way for users to specify the rules for establishing and
maintaining the quality and integrity of your spatial data. You want to be able to know, for
example, that all your parcel polygons completely form closed rings, they don't overlap one
another, and there are no gaps between parcels. You can also use topology to validate the spatial
relationships between feature classes. For example, the lot lines in your parcel data model must
share coincident geometry with the parcel boundaries.
How Is Topology Modeled in the Geodatabase?
In ArcGIS, a topology can be defined for one or
more of the feature classes contained in a feature data set. It can be defined for multiple point,
line, and polygon feature classes. A topology is a set of integrity rules for the spatial relationships
along with a few important properties: a cluster tolerance, feature class ranks (for coordinate
accuracy), errors (rule violations), and any exceptions to the rules you've defined. ArcEditor and
ArcInfo include a topology wizard to select which feature classes will participate in a topology
and define these properties.
Topology rules
Topology rules can be defined for the features within a feature class or for the features between
two or more feature classes. Example rules include polygons must not overlap, lines must not
have dangles, points must be covered by the boundary of a polygon, polygon class must not have
gaps, lines must not intersect, and points must be located at an endpoint. Topology rules can also
be defined for the subtypes of a feature class. Geodatabase topology is flexible since you select
which rules apply to the data in your feature class or feature data set.
Topology Properties
The cluster tolerance is similar to the fuzzy tolerance. It is a distance range in which vertices are
considered coincident. Vertices and endpoints falling within the cluster tolerance are snapped
during the validate topology process.
Coordinate accuracy ranks are defined at a feature class level and control how much the features
in that class can potentially move in relation to features in other classes when a topology is
validated. The higher the rank (one being the highest), the less the features move during the
validate process.
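The snapping rule can be sketched as follows. The tolerance, ranks, and coordinates are invented, and real validate-topology implementations are considerably more involved:

```python
import math

# Vertices from the lower-ranked feature class snap to vertices of the
# higher-ranked class when they fall within the cluster tolerance.
CLUSTER_TOL = 0.05

high_rank = [(0.0, 0.0), (10.0, 0.0)]            # rank 1: does not move
low_rank  = [(0.03, 0.01), (5.0, 5.0)]           # rank 2: may be snapped

def snap(vertices, anchors, tol):
    out = []
    for v in vertices:
        # move v to the nearest anchor if it lies within the tolerance
        nearest = min(anchors, key=lambda a: math.dist(v, a))
        out.append(nearest if math.dist(v, nearest) <= tol else v)
    return out

print(snap(low_rank, high_rank, CLUSTER_TOL))
# the first vertex snaps to (0.0, 0.0); the second is too far away and stays
```

Setting the tolerance too large would snap genuinely distinct vertices together, which is why it should reflect the coordinate accuracy of the data.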
The ArcInfo coverage model explicitly defines, stores, and maintains the topological information
within the coverage structure and employs a fixed set of tools for creating and maintaining
topology. The result is a tightly controlled environment in which the work flow is dictated by the
software and topological integrity is steadfastly maintained. The data model does not allow much
flexibility. Thus, application development (ArcEdit macros) for editing is required to build and
maintain more sophisticated data models than many GIS applications require.
In ArcGIS, geodatabase topology provides a powerful, flexible way for you to specify the rules
for establishing and maintaining the quality and integrity of your data, as well as providing a suite
of tools specifically designed to support topological geodatabase editing and maintenance (see
sidebar). The benefits of defining a topology in the geodatabase model include
Topology in the geodatabase model offers a more flexible environment along with the ability to
define and apply a wider set of integrity rules and constraints. As a result, almost any work flow
can be employed in which topological integrity is analyzed only at designated times specified by
the user. The user is no longer forced to rerun a clean command to rebuild topology. The user can
choose to validate the geodatabase topology at any time, perform queries and analyses using the
geodatabase data, and continue to produce high-quality maps.
ERRORS IN DIGITIZATION
Error is a flaw in data: the physical difference between the real world and the GIS. It goes
beyond mere mistakes and includes technical issues such as GIS operations, processing
algorithms, misuse of statistics, operator bias, equipment quality, etc.
Spatial data errors can occur in each of the methods listed above. Because data is shared
among many in the GIS community and used for legal matters, a spatial data set should identify
its data quality. Spatial data documentation should include the history of a data set, the source
date, positional and attribute accuracy, completeness of the data set, and the processing method
used to create the spatial data. Knowledge of this information helps the user to determine the
usability and liability of spatial data. The ability to identify and rectify spatial data errors allows
the user to get the maximum quality and usage out of a data set.
TYPES OF ERRORS
• Spatial errors
• Attribute errors
• Procedural/Analytic errors
• Digitizing Errors
– Systematic errors are often related to inaccurate geo-registration
– Random errors can be introduced by missed or inaccurately drawn features
• Attribute data entry errors
– Humans often make errors in transcribing attributes into GIS
• Equipment Errors
– Occasionally scanners, digitizing tablets, etc. can go off calibration
• Error is closely related to accuracy (i.e., higher accuracy implies fewer errors).
• Three classes of errors:
– Gross errors – refer to “mistakes”. They can be detected and avoided via well-
designed and careful data collection.
– Systematic errors – occur due to factors such as human bias, poorly calibrated
instruments, or environmental conditions.
– Random errors – They cannot be avoided and can be treated with
mathematical/statistical models.
Figure: Undershoots and overshoots in digitized lines.
The validate topology operation is used to snap feature geometry where vertices fall within the
cluster tolerance and to check for violations of the specified topology rules. Validate topology
begins by snapping together feature vertices that fall within the cluster tolerance taking into
account the ranks (as described above) of the feature classes. If feature vertices are found within
the cluster tolerance, the features from the feature class with the lowest rank of coordinate
accuracy will be moved to the features with the higher rank. As part of the snapping routine,
validate topology will also add vertices where features intersect if a vertex does not already exist.
Also, any rule violations discovered during validate topology are marked as errors. A complete
error listing is available in the properties of the topology in ArcCatalog and ArcMap. In ArcMap,
errors can be searched for, displayed, or listed in the Error Inspector.
When an error is discovered during the validate topology operation, the user has three options:
1. Correct the error using the Fix Topology Error tool or some other method.
2. Leave the error unresolved.
3. Mark the error as an exception. The Fix Topology Error tool offers a variety of methods
for resolving an error depending on the error and the feature type.
Rasterization errors
Vector to raster conversion can cause an interesting assortment of errors in the resulting data. For
example
• Topological errors
• Loss of small polygons
• Effects of grid orientation
• Variations in grid origin and datum
Topological error in vector GIS:
(a) loss of connectivity and creation of false connectivity
(b) loss of information
• Measurement errors: accuracy (ex. Altitude measurement or soil samples, usually related
to instruments)
• Computational errors: precision (ex. to what decimal point the data is represented?)
• Human error: error in using instruments, selecting scale, location of samples
• Data model representation errors
• Errors in derived data
Data quality issues: Sources of error in GIS
Spatial data editing refers to the removal of errors from, and updating of, digital maps. Newly
matter how carefully prepared, always have some errors. Digital maps downloaded from the
internet may contain errors from initial digitizing or from outdated data sources. Spatial Data
Editing covers two types of errors. Location errors such as missing polygons or distorted lines
relate to inaccuracies of map features, while others such as dangling arcs and unclosed polygons
relate to logical inconsistencies among map features. To correct location errors, one often has to
reshape individual arcs and digitize new arcs. To correct topological errors, one must be
knowledgeable about the topological relationships required and use a topology – based GIS
package to help make corrections.
Spatial Data Editing can go beyond individual digital maps. When a study area covers more than
one source map, editing must be expanded to cover errors in matching lines across the map
border. Spatial Data Editing may also include line simplification, line smoothing, and transferring
of map features between maps.
Most GIS packages will provide a suite of editing tools for the identification and removal of
errors in vector data. Corrections can be done interactively by the operator ‘on screen’, or
automatically by the GIS software. However, visual comparison of the digitized data against the
source document, either on paper or on the computer screen, is a good starting point. This will
reveal obvious omissions, duplications and erroneous additions. Systematic errors such as
overshoots in digitized lines can be corrected automatically by some digitizing software, and it is
important for data to be absolutely correct if topology is to be built for a vector data set.
Noise may be inadvertently added to the data, either when they were first collected or during
processing. This noise often shows up as scattered pixels whose attributes do not conform to those
of neighboring pixels. This form of error may be removed by filtering. Filtering is considered in
this book as an analysis technique but, in brief, it involves passing a filter (a small grid of pixels
specified by the user; often a 3x3 pixel square is used) over the noisy data set and recalculating
the value of the central (target) pixel as a function of all the pixel values within the filter. This
technique needs to be used with care as genuine features in the data can be lost if too large a filter
is used.
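The filtering idea can be sketched with a 3x3 mean filter in pure Python. The values are invented, and edge cells are left untouched for brevity (real packages also offer median and other filters):

```python
# A minimal 3x3 mean filter over a raster with one noisy pixel.
grid = [
    [5, 5, 5, 5],
    [5, 99, 5, 5],   # 99 is a noise pixel
    [5, 5, 5, 5],
    [5, 5, 5, 5],
]

def mean_filter(g):
    rows, cols = len(g), len(g[0])
    out = [row[:] for row in g]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # recalculate the target pixel from the 3x3 window around it
            window = [g[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = sum(window) / 9.0
    return out

filtered = mean_filter(grid)
print(round(filtered[1][1], 1))  # prints 15.4: the spike is smoothed
```

As the text warns, a larger window would smooth more aggressively and could erase genuine small features along with the noise.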
32. Automating the overlay process
Overlay operations involve the placement of one map layer (set of features) A, on top of a second
map layer, B, to create a map layer, C, that is some combination of A and B. C is normally a new
layer, but may be a modification of B. Layer A in a vector GIS will consist of points, lines and/or
polygons, whilst layer B will normally consist of polygons. All objects are generally assumed to
have planar enforcement, and the resulting object set or layer must also have planar enforcement.
The general term for such operations is topological overlay, although a variety of terminology is
used by different GIS suppliers, as we shall see below. In raster GIS, layers A and B are both
grids, which should have a common origin and orientation; if not, resampling is required.
The process of overlaying map layers has some similarity with point set theory, but a large
number of variations have been devised and implemented in different GIS packages. The
principal operations have previously been outlined as the spatial analysis component of the OGC
simple features specification. The Open Source package, GRASS, is a typical example of a GIS
that provides an implementation of polygon overlay which is very similar to conventional point
set theory (Figure 1), with functions provided including:
• Intersection, where the result includes all those polygon parts that occur in both A and B
• Union, where the result includes all those polygon parts that occur in either A or B, so is
the sum of all the parts of both A and B
• Not, where the result includes only those polygon parts that occur in A but not in B
(sometimes described as a Difference operation), and
• Exclusive or (XOR), which includes polygons that occur in A or B but not both, so is the
same as (A Union B) minus (A Intersection B)
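Treating each layer as the set of polygon parts it covers, the four operations reduce to set algebra. The cell identifiers below are toy values:

```python
# Each set stands for the polygon parts covered by a layer.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

print(sorted(A & B))   # Intersection: parts in both A and B
print(sorted(A | B))   # Union: parts in either A or B
print(sorted(A - B))   # Not / Difference: parts in A but not in B
print(sorted(A ^ B))   # XOR: parts in A or B but not both

# XOR is indeed (A Union B) minus (A Intersection B), as stated above
assert (A ^ B) == (A | B) - (A & B)
```

Real polygon overlay must of course also compute new boundary geometry where polygons partially intersect; the set view captures only the membership logic.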
TNTMips provides similar functionality and uses much the same terminology as GRASS (AND,
OR, XOR, SUBTRACT) under the heading of vector combinations rather than overlay
operations, and permits lines as well as polygons as the “operator” layer (Figure 1).
In land suitability assessment, the map overlay technique is often used in conjunction with a
weighting scheme. A person first determines parent maps' weights by his perceptions about the
importance or relative importance of these maps to land suitability. These weight values are then
incorporated into the map overlay process. On the resultant overlaid maps, the higher suitability
scores are always assigned to those sites that have better conditions on the more important parent
maps. Of the two approaches that one can take in determining maps' weights, tradeoff weighting
is more precise than direct assessment, but also more difficult to use because it requires greater
cognitive efforts from the users. This article presents a weighting-by-choosing method that
facilitates the process of making tradeoffs through a series of site selection exercises. By using
hypothetical reference sites as tangible manipulatives, it transforms an otherwise difficult
cognitive task into a simple selection exercise. At present, the method applies to two maps at a
time, but could potentially be extended to multiple maps.
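A minimal sketch of weighted overlay on two parent maps follows; the weights and suitability scores are invented for illustration:

```python
# Weighted map overlay: two parent raster maps combined cell by cell
# with user-chosen weights reflecting relative importance.
water_avail = [[3, 1], [2, 3]]      # suitability scores 1..3 per cell
nutrient    = [[2, 2], [1, 3]]

W_WATER, W_NUTRIENT = 0.6, 0.4      # weights chosen by the analyst

suitability = [
    [W_WATER * water_avail[r][c] + W_NUTRIENT * nutrient[r][c]
     for c in range(2)]
    for r in range(2)
]
print(suitability)  # higher values indicate better sites
```

As the text notes, the weights themselves can come from direct assessment or from tradeoff exercises; the overlay arithmetic is the same either way.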
OVERLAY OPERATIONS:
The hallmark of GIS is overlay operations. Using these operations, new spatial elements are
created by the overlaying of maps.
There are basically two different types of overlay operations, depending upon data structures:
1. RASTER OVERLAY: a relatively straightforward operation; often many data sets can be
combined and displayed at once.
2. VECTOR OVERLAY: far more difficult and complex, and involves more processing.
LOGICAL OPERATORS:
The concept of map logic can be applied during overlay. The logical operators are Boolean
functions. There are basically four types of Boolean Operators: viz., OR, AND, NOT, and XOR.
With the use of logical, or Boolean, operators, spatial elements or attributes are selected that
fulfill certain conditions depending on two or more spatial elements or attributes.
1. VECTOR OVERLAY
During vector overlay, map features and the associated attributes are integrated to produce new
composite maps. Logical rules can be applied to how the maps are combined. Vector overlay can
be performed on different types of map features: viz.,
Polygon-on-polygon overlay
Line-in-polygon overlay
Point-on-polygon overlay
During the process of overlay, the attribute data associated with each feature type is merged. The
resulting table will contain both sets of attribute data. The process of overlay will depend upon the
modelling approach the user needs. One might need to carry out a series of overlay procedures to
arrive at the conclusion, depending upon the criterion.
Polygon-on-Polygon Overlay
FIGURE 2: Difference between a Topologic Overlay and a Graphic Overplot
2.Raster Overlay
In raster overlay, the pixel or grid cell values in each map are combined using arithmetic and
Boolean operators to produce a new value in the composite map. The maps can be treated as
arithmetical variables on which complex algebraic functions are performed. The method is often
described as map algebra. Raster GIS provides the ability to combine map layers mathematically.
This is particularly important for modelling, in which various maps are combined using various
mathematical functions. Conditional operators are among the basic mathematical functions
supported in GIS.
Conditional Operators
Conditional operators were already used in the examples given above. They all evaluate whether a
certain condition has been met.
= eq 'equal' operator
<> ne 'non-equal' operator
< lt 'less than' operator
<= le 'less than or equal' operator
> gt 'greater than' operator
>= ge 'greater than or equal' operator
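As a sketch, a conditional operator applied to a raster produces a 0/1 grid: each output cell is 1 where the condition holds and 0 where it does not. The elevation values below are hypothetical:

```python
# Conditional operator applied cell by cell: output is 1 where the
# condition holds, 0 elsewhere (hypothetical elevation values).

elevation = [[ 95, 120, 140],
             [ 80, 100, 160],
             [ 60,  90, 200]]

def con(grid, predicate):
    """Evaluate a condition on every cell, returning a 0/1 grid."""
    return [[int(predicate(v)) for v in row] for row in grid]

above_100   = con(elevation, lambda v: v > 100)    # the 'gt' operator
exactly_100 = con(elevation, lambda v: v == 100)   # the 'eq' operator
```

The resulting binary grids can then be combined with Boolean operators, as described above.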
Many systems can now handle both vector and raster data. Vector maps can easily be draped
onto raster maps.
APPLICATION:
A Physical Evaluation of Land Suitability for Rice
The objective of this study was to establish a spatial model for land evaluation for rice
using GIS.
The study area, the lower Namphong watershed, covers an area of about 3000 sq. km
and is located in Northeast Thailand. A land unit resulting from the overlay process of the
selected theme layers has unique information on the land qualities on which the
suitability rating is based. The selected theme layers for rice include water availability,
nutrient availability, landform, soil texture and soil salinization. The theme layers were
compiled from existing information and satellite data. Analysis of rainfall data and the
irrigated area gives the water availability. Spatial information on nutrient availability was
formulated using the soil map of the Land Development Department. Landform of the
area was prepared from Landsat TM. Soil texture and soil salinization are based on the
soil map.
Each of the above-mentioned layers, with associated attribute data, was digitally encoded
in a GIS database to create thematic layers. An overlay operation on the layers produces a
resultant polygonal layer, each polygon of which is a land unit with the characteristics of
the land. A land suitability rating model applied to the resultant polygonal layer provided
the suitability classes for rice. The resultant suitability classes were checked against rice
yields collected by the Department of Agricultural Extension and were found to be
satisfactory.
The evaluation model is defined using the value of factor rating as follows:
Suitability = W x NAI x R x S x T.
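The model is a cell-by-cell product of the five factor-rating layers. The sketch below assumes (as the article does not state the actual rating scales) hypothetical ratings between 0 and 1, where W is water availability, NAI the nutrient availability index, R landform, S soil texture and T soil salinization:

```python
# Cell-by-cell product of the five factor-rating layers in the model
# Suitability = W x NAI x R x S x T. All rating values here are
# hypothetical illustrations, not the study's actual ratings.

W   = [[1.0, 0.8], [0.6, 0.0]]   # water availability rating
NAI = [[0.9, 0.9], [0.5, 0.5]]   # nutrient availability index rating
R   = [[1.0, 0.7], [1.0, 0.7]]   # landform rating
S   = [[0.8, 0.8], [0.8, 0.8]]   # soil texture rating
T   = [[1.0, 1.0], [0.4, 1.0]]   # soil salinization rating

def multiply_layers(*layers):
    """Overlay co-registered layers by multiplying matching cells."""
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[1.0] * cols for _ in range(rows)]
    for layer in layers:
        for i in range(rows):
            for j in range(cols):
                out[i][j] *= layer[i][j]
    return out

suitability = multiply_layers(W, NAI, R, S, T)
```

A multiplicative model of this kind means that a zero rating in any single factor (for example, no water availability) makes the whole land unit unsuitable.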
Table 1. Suitability areas for rice in the lower Namphong watershed, Northeast
Thailand

Suitability class       Area (km2)   %
Highly suitable          208.30       6.97
Moderately suitable      868.26      29.03
Marginally suitable     1265.47      42.32
Unsuitable               530.27      17.73
(Water body)              36.63       1.23
(Village)                 81.48       2.72
Total                   2990.41     100
The study provides an approach to identify parametric values in modeling the land
suitability for rice. The theme layers to be input in the modeling are assigned the rating
value as attribute data. Overall insight into the factors affecting the suitability of land can
be provided spatially and quantitatively. The results indicated that the highly suitable land
covers an area of about 208.3 km2 and is restricted to the irrigated areas with a high NAI.
Some 17.73 percent of the watershed is unsuitable for rice, which corresponds to the
sloping land. It has become increasingly apparent that computer-based GIS and remote
sensing data can provide the means to model land suitability effectively.
To assess the reliability of the methodology developed, the suitability classes were
checked against rice yields. The rice yields in the study area averaged 4171.87, 2968.75
and 2078.12 kg/ha for the generated class units S1, S2 and S3 respectively. For more
accurate results, average rice yields should be collected periodically, possibly over 4-5
consecutive years. Further investigation will be needed to establish the relationship
between the resultant classes and rice yield.
In conclusion, with spatial modeling analysis it is possible to assess land suitability with
higher accuracy. In addition, the modeling provides an approach to improving rice yield
by enhancing the components of the modeling input.
33 Raster Based Analysis, Map Algebra, Grid Based
Operations, Local, Focal, Zonal & Global Functions
Raster Based Analysis :-
Raster analysis is similar in many ways to vector analysis. The major differences between raster
and vector modeling are dependent on the nature of the data models themselves. In both raster
and vector analysis, all operations are possible because datasets are stored in a common
coordinate framework. Every coordinate in the planar section falls within or in proximity to an
existing object, whether that object is a point, line, polygon, or raster cell.
Raster analysis, however, bases its spatial relationships solely on the location of the
cell. Raster operations performed on multiple input raster datasets generally output cell values
that are the result of computations on a cell-by-cell basis. The value of the output for one cell is
usually independent of the value or location of other input or output cells. In some cases, output
cell values are influenced by neighboring cells or groups of cells, such as in focal functions.
Raster data are especially suited to continuous data. Continuous data change smoothly across a
landscape or surface. Phenomena such as chemical concentration, slope, elevation, and aspect
are dealt with in raster data structures far better than in vector data structures. Because of this,
many analyses are better suited or only possible with raster data.
GISs can display data in various formats but usually can only use data in a specific format
(e.g. ArcGIS can only analyze grids).
Raster analysis is based on the cell as the basic unit of analysis
– Can perform analysis on individual cells
– Can analyze data on a group of cells
– Can perform analysis on all cells within a grid
Analysis can operate on single raster grids or multiple raster grids
Data Analysis Environment
– Specifies the extent of the analysis area
– Specifies the cell size of the output grid
Mask Grid
– Can also be used to define the area of analysis
Map Algebra :-
Like most of the analytical frameworks embodied in current GIS packages, map algebra is
primarily oriented toward data that are static. Each layer is associated with a particular moment
or period of time, and analytical capabilities are intended to deal with spatial relationships. In its
original form, map algebra was never intended to handle spatial data with a temporal component.
However, as the availability of spatio-temporal data has increased dramatically in recent years
due to the growth of satellite remote sensing and other technologies, and as the sophistication of
things such as video games and animation in the motion picture industry has raised popular
expectations for spatio-temporal processing capabilities, there has been an increasing
demand for spatio-temporal extensions of GIS.
Map Algebra
Map algebra is a cell by cell combination of raster layers using mathematical operations
– Addition, subtraction, division, max, min: virtually any mathematical operation you
would find in Excel
Outgrid = grid1 * 2
Outgrid = sin(grid1)
Map algebra and raster GIS are quite simple to visualize in a spreadsheet; an example is the
multiplication and addition of two grids.
The use of arrays makes map algebra and raster GIS very computationally efficient.
But be careful of:
Layers that are not coincident
Different cell sizes
Map algebra can be extended to performing a number of mathematical operations.
The computer will allow you to perform virtually any mathematical calculation.
For example, you can create a grid where water features are 0 and land values are 1.
Then, you can multiply this grid with an elevation map. The output will include 0’s
where water existed (x * 0 = 0), and the original elevation value where land existed (x * 1
= x)
Or, you can add the elevations and the grid with 0’s and 1’s together (but, it would be
meaningless!)
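The land/water mask example above can be sketched directly: multiplying a 0/1 mask grid with an elevation grid zeroes out water cells and leaves land elevations unchanged. The values are hypothetical:

```python
# The land/water mask example: multiplying a 0/1 mask grid with an
# elevation grid keeps land elevations and zeroes water cells.

land_mask = [[1, 1, 0],
             [1, 0, 0],
             [1, 1, 1]]
elevation = [[120, 135, 4],
             [110, 2, 3],
             [100, 105, 98]]

masked = [[m * e for m, e in zip(mask_row, elev_row)]
          for mask_row, elev_row in zip(land_mask, elevation)]
# water cells (mask 0) become 0; land cells keep their elevation
```

As the text notes, adding the two grids instead would execute just as happily, but produce a meaningless layer; map algebra enforces no semantics on the operands.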
ArcGIS can deal with several formats of raster data. Although ArcGIS can load all supported
raster data types as images, and analysis can be performed on any supported raster data set, the
output of raster analytical functions is always an ArcInfo-format grid. Because the native raster
dataset in ArcGIS is the ArcInfo-format grid, from this point on the term grid will mean the
analytically enabled raster dataset.
Grid Layers
Grid layers are graphical representations of the ArcGIS and ArcInfo implementation of the raster
data model. Grid layers are stored with a numeric value for each cell. The numeric cell values are
either integer or floating-point. Integer grids have integer values for the cells, whereas floating-
point grids have value attributes containing decimal places.
Cell values may be stored in summary tables known as Value Attribute Tables (VATs) within the
info subdirectory of the working directory. Because the possible number of unique values in
floating-point grids is high, VATs are not built or available for floating-point grids.
VATs do not always exist for integer grids; whether one is built depends on the number of
unique cell values in the grid.
It is possible to convert floating-point grids to integer grids, and vice versa, but this frequently
leads to a loss of information. For example, if your data have very precise measurements
representing soil pH, and the values are converted from decimal to integer, zones which were
formerly distinct from each other may become indistinguishable.
Grid zones are groups of either contiguous or noncontiguous cells having the same value. Grid
regions are groups of contiguous cells having the same value. Therefore, a grid zone can be
composed of 1 or more grid regions.
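The zone/region distinction can be sketched with a small flood-fill labeller: a zone is all cells sharing a value, while each region is one contiguous patch of that value (4-connectivity assumed here; the grid values are hypothetical):

```python
# Grid zones vs. grid regions: a zone is every cell with a given value;
# a region is one contiguous patch of that value. A flood fill labels
# the contiguous regions (4-connectivity assumed).

def label_regions(grid):
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for i in range(rows):
        for j in range(cols):
            if labels[i][j] == 0:
                next_label += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if labels[a][b]:
                        continue
                    labels[a][b] = next_label
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rows and 0 <= nb < cols
                                and labels[na][nb] == 0
                                and grid[na][nb] == grid[a][b]):
                            stack.append((na, nb))
    return labels, next_label

zone_grid = [[1, 1, 2],
             [2, 2, 2],
             [1, 2, 1]]
regions, count = label_regions(zone_grid)
# zone 1 splits into three separate regions here; zone 2 forms one region
```

Here the two zones (values 1 and 2) decompose into four regions in total, illustrating that a zone can be composed of one or more regions.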
Although Raster Calculations (which will be discussed shortly) can be performed on both integer
and floating-point grids, normal tabular selections are only possible on integer grids that have
VATs. This is because a tabular selection is dependent on the existence of an attribute table. Those
grids without VATs have no attribute tables, and are therefore unavailable for tabular selections.
There are a large number of basic grid operations supported for image and general raster files.
These include local, focal and zonal operations depending on the scope of the operation. Such
operations may be applied to a single grid, or to a number of input grids, depending on the
operation in question. The set of possible operations of this type are often referred to as Map
Algebra. Originally this term was introduced by Tomlin (1983) as the process of map
combination for co-registered layers with rasters of identical size and resolution. Combinations
involved arithmetic and Boolean operations. However the term is now used more widely by many
suppliers. For example, ArcGIS describes the set of all operations performed using its Spatial
Analyst option as “Map Algebra”. More specifically it divides such functions into five main
categories, the three above plus Global and Application-specific:
Local functions, which include mathematical and statistical functions, reclassification, and
selection operations
Focal functions, which provide tools for neighbourhood analysis
Zonal functions, which provide tools for zonal analysis and the calculation of zonal
statistics
Global functions, which provide tools for full raster layer or raster dataset analysis, for
example the generation of Cost Distance rasters
Application functions, specific to tasks such as hydrology and geometric transformation
Local Functions
Most grid operations perform their algorithm on every cell in the dataset. You can think of the
local function calculation engine as starting at one cell location, performing a calculation once on
the inputs at that location, and then moving on to the next cell location, and so on. An example is
the local sine function, where each individual output grid cell value is the result of the sine
function performed on the corresponding input cell. Most of the functions that create new grids
based on analyses performed on vector layers are local functions.
Mathematical operations
Reclassification
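A minimal sketch of reclassification as a local function: each input cell value is mapped to a class code independently of its neighbours. The slope values and class breaks below are hypothetical:

```python
# Reclassification as a local function: each cell is classified
# independently of its neighbours. Break values are hypothetical.

slope = [[ 2.0, 7.5, 16.0],
         [ 0.5, 12.0, 30.0]]

def reclassify(grid, breaks):
    """Assign each cell the index of the first break value it falls below."""
    def classify(v):
        for code, upper in enumerate(breaks):
            if v < upper:
                return code
        return len(breaks)
    return [[classify(v) for v in row] for row in grid]

slope_class = reclassify(slope, breaks=[5.0, 10.0, 20.0])
# 0 = gentle (<5), 1 = moderate (<10), 2 = steep (<20), 3 = very steep
```

Because each output cell depends on exactly one input cell, this is the defining shape of a local function.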
Global Functions
Global functions perform operations based on the entire input grid. Functions such as
calculating distance grids and flow accumulation require processing of the entire grid to create
the output.
Focal Functions
Certain grid operations do consider neighborhoods: the output cell is the result of a calculation
performed on a group of cells determined by a window (known as a kernel or focus) around the
cell of interest. These operations are called focal functions. For example, a smoothing (low-pass
filter) algorithm will take the mean value of a 3 x 3 cell kernel and place the output value in the
location of the central cell. If the kernel contains locations that are outside of the grid, these
locations are not used in the calculation.
In this focal mean example, the outlined cells in the input grid are averaged, and the resultant
value is placed in the center cell of the kernel in the output grid. This is done for every 3-x-3
neighborhood in the input.
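The focal mean described above can be sketched as follows; kernel positions falling outside the grid are simply left out of the average, as the text specifies:

```python
# Focal mean with a 3x3 kernel: each output cell is the mean of the
# input cells inside the kernel; out-of-grid positions are skipped.

def focal_mean(grid):
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [grid[a][b]
                      for a in range(max(0, i - 1), min(rows, i + 2))
                      for b in range(max(0, j - 1), min(cols, j + 2))]
            out[i][j] = sum(window) / len(window)
    return out

smoothed = focal_mean([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])
```

At a corner, only the four in-grid kernel cells contribute to the mean, which is why edge cells are averaged over fewer values than interior cells.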
Zonal Functions
Other operations perform functions based on a group of cells with a common value (a zone) in
one of the inputs. These operations are known as zonal functions, since they calculate a single
output value for each group of cells based on the location of the input zone.
Here, the zones are defined by the zone grid. The function is a zonal sum, which sums all
the input cells per zone, and places the output in each corresponding zone cell in the
output. The zone boundaries are included only for illustrative purposes, and are not part of the
data.
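The zonal sum described above can be sketched as follows; the zone grid assigns each cell to a zone, the value grid supplies the numbers, and every output cell receives the total for its zone (values hypothetical):

```python
# Zonal sum: every output cell receives the sum of the value cells
# belonging to its zone, as defined by the zone grid.

zone_grid  = [[1, 1, 2],
              [1, 2, 2]]
value_grid = [[3, 5, 2],
              [4, 1, 6]]

def zonal_sum(zones, values):
    totals = {}
    for zone_row, value_row in zip(zones, values):
        for z, v in zip(zone_row, value_row):
            totals[z] = totals.get(z, 0) + v
    return [[totals[z] for z in zone_row] for zone_row in zones]

out = zonal_sum(zone_grid, value_grid)
# zone 1 cells sum to 3 + 5 + 4 = 12, zone 2 cells to 2 + 1 + 6 = 9
```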
Vector Data:
Basic entities of vector data are the point, line, node, segment, and polygon.
Point: an (x,y) coordinate pair, the basis of all higher-order entities;
Line: a straight line feature joining two points;
Node: the point defining the end of one or more segments;
Segment: a series of straight line sections between two nodes;
Polygon (Area, Parcel): an area feature whose perimeter is defined by a series of enclosing
segments and nodes.
Advantages and disadvantages of vector data:

Advantage: Data can be represented at its original resolution and form without generalization.
Disadvantage: The location of each vertex needs to be stored explicitly.

Advantage: Graphic output is usually more aesthetically pleasing (traditional cartographic
representation).
Disadvantage: For effective analysis, vector data must be converted into a topological structure;
this is often processing intensive and usually requires extensive data cleaning.

Advantage: Since most data, e.g. hard-copy maps, is in vector form, no data conversion is
required.
Disadvantage: Algorithms for manipulation and analysis functions are complex and may be
processing intensive; this often inherently limits the functionality for large data sets, e.g. a
large number of features.

Advantage: Accurate geographic location of data is maintained.
Disadvantage: Continuous data, such as elevation data, is not effectively represented in vector
form; substantial data generalization or interpolation is usually required for these data layers.
In a vector-based system topological map overlay operations are much more complex than the
raster-based case, as the topological data is stored as points, lines and/or polygons. This requires
relatively complex geometrical operations to derive the intersected polygons, and the necessary
creation of new nodes (points) and arcs (lines), with their combined attribute values.
In a vector-based system, topological map overlay operations allow the
polygon features of one layer to be overlaid on the polygon, point, or line features of another
layer. Depending on the objectives of the Overlay operation, different output features can result.
• In GIS, the normal case of polygon overlay takes two map layers and overlays them
• each map layer is covered with non-overlapping polygons
• If we think of one layer as "red" and the other as "blue", the task is to find all of the
polygons on the combined "purple" layer
• Attributes of a "purple" polygon will contain the attributes of the "red" and "blue"
polygons which formed it
o can think of this process as "concatenating" attributes
o usually a new attribute table is constructed that consists of the combined old
attributes, or new attributes formed by logical or mathematical operations on the
old ones
• Number of polygons formed in an overlay is difficult to predict
o there may be many polygons formed from a pair of "red" and "blue" polygons,
with the same "purple" attributes
• When two maps are overlaid, the result is a map with a mixture of 3- and 4-arc
intersections
o four-arc intersections do not generally occur on simple polygon maps
Windowing
Buffering
• The process of building points, lines and areas from digitized "spaghetti"
• wherever intersections occur between lines, the lines are broken and a point is inserted
• The result is a set of points, lines and areas which obey specific rules.
Since vector-based topological map overlay operations involve overlaying the point, line, or
polygon features of one layer on the polygon features of another layer, the three following
processing algorithms are fundamental:
Point-in-Polygon
Line-in-Polygon
Polygon-on-Polygon
1. Point-in-Polygon Processing
Point features of one input layer can be overlaid on polygon features of another input layer;
Point-in-Polygon analysis identifies the polygon within which each point falls. The result of a
Point-in-Polygon overlay is a set of points with additional attributes (i.e. those attributes of the
polygon within which the point lies).
The basic algorithm used to perform Point-in-Polygon analysis is detailed below:
Usually a minimum bounding rectangle for the polygon is defined within the system, by its
maximum and minimum coordinates. It is easy to determine whether a point (or line end) is inside
or outside the rectangle's extent. If the point lies outside the minimum bounding rectangle, then it
must also lie outside the polygon and the analysis is complete.
However if the point falls inside the minimum bounding rectangle then the following
further processing is required:
From the point, a line parallel to an axis is drawn (usually either the X or Y axis). This half line
extends from the point (or line end) to beyond the extremities of the polygon, with its direction
usually towards the higher values of this axis.
The system then counts the number of times this “half line” intersects with the polygon boundary.
If the result is an even number (or zero), then this indicates that the point lies outside the polygon.
If the result is an odd number, then this indicates that the point falls inside the polygon.
The Point-in-Polygon algorithm described above also handles the special cases of "island"
polygons, polygons with holes, and concave polygons, although care is needed where the half
line passes exactly through a vertex or along an edge of the polygon.
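The half-line test can be sketched directly: a ray is cast from the point toward increasing X and the boundary crossings are counted, with an odd count meaning the point is inside. The vertex/edge-grazing cases mentioned above are not handled in this minimal version:

```python
# The half-line (ray casting) point-in-polygon test: cast a ray toward
# +X and count boundary crossings; an odd count means "inside".
# Vertex and edge grazing cases are not handled in this sketch.

def point_in_polygon(px, py, polygon):
    """polygon: list of (x, y) vertices, implicitly closed."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge cross the horizontal line through py?
        if (y1 > py) != (y2 > py):
            # x coordinate where the edge meets that horizontal line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:       # crossing lies on the +X half line
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

For example, `point_in_polygon(2, 2, square)` crosses the boundary once on its way out and is therefore inside, while a point at (5, 2) crosses it zero times and is outside.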
2. Line-in-Polygon Processing
Polygon features of one input layer can be overlaid on lines (arcs) of another input layer. A line
can be made up of many segments; Line-in-Polygon analysis therefore identifies which polygon
(if any) contains each line or line segment. The result of a Line-in-Polygon overlay is a new layer
containing lines with additional attributes (i.e. those attributes of the polygon within which the
line falls). Sometimes a line segment falls directly on a polygon boundary instead of within the
polygon. In this special case, the additional line attributes will contain the attributes of both
polygons, lying to the left and right sides of the line.
As lines and polygons are both made up of line segments, Line-in-Polygon analysis requires the
determination of whether any of these overlaid line segments intersect. The task of determining
whether two line segments intersect consists of a simple mathematical calculation; however the
complexity of this operation is increased by the number of line intersection checks that need to be
made for a complete Line-in-Polygon overlay analysis. Therefore Geographical Information
Systems use the following algorithm to minimize the number of calculations required.
• Minimum bounding rectangles of both the line and the polygon are used to reduce the
number of computations required. First, a check is made to determine whether the
minimum bounding rectangle of the line falls completely outside the minimum bounding
rectangle of the polygon (each defined by the element's minimum and maximum
coordinates). If so, the line definitely does not lie within the polygon, and the analysis is
complete; otherwise the following further processing is required:
• As the line may be made up of many line segments, each line segment has to be tested for
intersection or inclusion within the polygon. If the line segment lies outside the polygon
minimum bounding rectangle, then that segment also lies outside the polygon and can be
disregarded, otherwise the following processing must continue:
• Testing whether a line segment is totally inside a polygon or not can be difficult because
polygons can have concavities or holes within them, therefore it is not enough to simply
determine if both end-points of a line segment lie within the polygon. To deal with this
problem, the polygon and line segment are both rotated such that the line segment lies
parallel to one of the axis (X or Y).
• The next step uses the “half-line” test (as described in the Point-in-Polygon Analyses
theory above) along the axis parallel to the line segment to determine whether each
segment end-point is in or out and note all segment intersections with the polygon. Note
that half-line intersection points are not necessarily also segment intersection points.
• If the results of the half-line testing show that both points are in and there were no
segment intersections, then the whole line lies inside the polygon. Otherwise, if a
point starts outside, then that first part of the line segment is outside the polygon until the
first segment intersection point, the next part of the line segment is inside the polygon
until the next segment intersection point, and so on.
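The segment-intersection check at the heart of these steps is indeed a simple calculation; a common way to sketch it is with an orientation (cross-product sign) test. Collinear-overlap cases are ignored here for brevity:

```python
# Segment intersection via orientation tests: two segments properly
# intersect when each segment's endpoints lie on opposite sides of the
# other segment. Collinear overlap cases are not handled in this sketch.

def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1, -1 or 0."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment a-b properly intersects segment c-d."""
    return (orientation(a, b, c) != orientation(a, b, d)
            and orientation(c, d, a) != orientation(c, d, b))

crossing = segments_intersect((0, 0), (4, 4), (0, 4), (4, 0))   # an X shape
disjoint = segments_intersect((0, 0), (1, 1), (2, 2), (3, 1))
```

Since the calculation is cheap, the expense of a full Line-in-Polygon or Polygon-on-Polygon overlay comes from the sheer number of such checks, which is exactly why the minimum-bounding-rectangle filtering above is used.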
3. Polygon-on-Polygon Processing
This process merges overlapping polygons from two input layers to create new polygons in an
output layer. The result of a Polygon-on-Polygon overlay is an output layer containing new
polygons with merged attributes (i.e. those attributes from each of the two overlapping polygons).
Note: As polygons are made up of line segments, Polygon-on-Polygon analysis requires the
determination of whether these overlaid line segments intersect. The processing for Polygon-on-
Polygon analysis is therefore essentially the same as for Line-in-Polygon analysis (as detailed in
the Line-in-Polygon theory above).
35. Spatial representation of geographic data in raster and
vector models
Abstract:
There is a growing need to move away from traditional interpretation of data analysis through
manual mapping and manual database management systems, whose accuracy is often suspect.
Map making and geographic analysis are not new, but a GIS performs these tasks faster and with
more sophistication than traditional manual methods do. A Geographic Information System (GIS)
is a computer-based tool for mapping and analyzing things that exist and events that happen on
earth. GIS technology integrates common spatial database operations such as query and
statistical analysis with the unique visualization and geographic analysis benefits offered by
maps. Here is an attempt to explain the basic concepts of spatial data representation of
geographical features in vector and raster models, which are important in understanding the
components of GIS.
Introduction:
Spatial data in GIS has two primary data formats: raster and
vector. Raster uses a grid cell structure, whereas vector is
more like a drawn map. Raster format generalizes the scene
into a grid of cells, each with a code to indicate the feature
being depicted. The cell is the minimum mapping unit.
Raster has generalized reality: all of the features in the cell
area are reduced to a single cell identity. The raster cell’s
value or code represents all of the features within the grid; it
does not maintain true size, shape, or location for individual
features. Even where “nothing” exists (no data), the cells
must be coded.
Vector format has points, lines, and polygons that appear
much like a drawn map. Vectors are data elements describing
position and direction. In GIS, vector is the map-like drawing
of features, without the generalizing effect of a raster grid.
Shape is therefore better retained, and vector is much more
spatially accurate than the raster format.
Raster Model:
All spatial data models are approaches for storing the spatial
location of geographic features in a database. Vector storage
implies the use of vectors (directional lines) to represent a
geographic feature. Vector data is characterized by the use of
sequential points or vertices to define a linear segment. Each
vertex consists of an X coordinate and a Y coordinate.
Raster structures may lead to increased storage in certain situations, since they store each cell in
the matrix regardless of whether it is a feature or simply "empty" space.
The size of cells in a tessellated data structure is selected on the basis of the data accuracy and the
resolution needed by the user. There is no explicit coding of geographic coordinates required
since that is implicit in the layout of the cells. A raster data structure is in fact a matrix where any
coordinate can be quickly calculated if the origin point is known, and the size of the grid cells is
known. Since grid-cells can be handled as two-dimensional arrays in computer encoding many
analytical operations are easy to program. This makes tessellated data structures a popular choice
for many GIS software. Topology is not a relevant concept with tessellated structures since
adjacency and connectivity are implicit in the location of a particular cell in the data matrix.
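The implicit georeferencing described above can be sketched directly: given only the origin and the cell size, the map coordinate of any cell (and vice versa) is a direct calculation, with no coordinates stored per cell. An upper-left origin is assumed here, a common but not universal convention:

```python
# Implicit georeferencing of a raster: with a known origin and cell size,
# cell <-> coordinate conversion is pure arithmetic; no coordinates are
# stored per cell. Origin assumed at the upper-left corner.

def cell_to_coord(row, col, origin_x, origin_y, cell_size):
    """Map coordinates of the centre of cell (row, col)."""
    x = origin_x + (col + 0.5) * cell_size
    y = origin_y - (row + 0.5) * cell_size
    return x, y

def coord_to_cell(x, y, origin_x, origin_y, cell_size):
    """Row and column of the cell containing point (x, y)."""
    col = int((x - origin_x) // cell_size)
    row = int((origin_y - y) // cell_size)
    return row, col

# hypothetical 30 m grid with its upper-left corner at (500000, 1600000)
x, y = cell_to_coord(0, 0, origin_x=500000.0, origin_y=1600000.0, cell_size=30.0)
```

This is precisely why no explicit coding of geographic coordinates is required in a raster structure.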
Several tessellated data structures exist; however,
only two are commonly used in GISs. The most
popular cell structure is the regularly spaced matrix or
raster structure. This data structure involves a division of
spatial data into regularly spaced cells. Each cell is of the
same shape and size. Squares are most commonly utilized.
Vector Model:
Vector advantages:
1. In general, vector data is more map-like.
2. Is very high resolution.
3. The high resolution supports high spatial
accuracy.
4. Vector formats have storage advantages.
5. The general public usually understands what
is shown on vector maps.
6. Vector data can be topological.
Vector disadvantages:
1. May be more difficult to manage than raster
formats.
2. Require more powerful, high-tech machines.
3. The use of better computers, increased management needs, and other considerations often
make the vector format more expensive.
36 Network Analysis – Concepts & Evaluation
Network Analysis:-
With ArcGIS Network Analyst, users can model real-world transportation networks and solve
routing problems within ArcGIS Desktop, ArcGIS Server, and ArcGIS Engine. This seminar
introduces the ArcGIS 9.1 Network Analyst extension and network dataset. The presenter
demonstrates how the extension solves various problems, such as finding the best route or
closest facility with travel directions, determining service areas, and generating origin-
destination cost matrices. In addition, the presenter explains how to create simple and multi-
modal network datasets to support various types of network analysis.
Network Analysis
*Connectivity tracing
*Cycle detection
*Isolation tracing
Basically, for a long-distance trip we need a small-scale map to plan the travel route. The route
may pass through many cities, and we need larger-scale maps to find an optimal route within
those cities. The small-scale network is analyzed first to get the route, and the inbound and
outbound roads of each city are identified during this analysis. Network analysis for the city of
interest is then performed using the same inbound and outbound roads as origin and destination
respectively. Together with dynamic network analysis and variable traffic speeds on the roads,
we simulate the real situation of our route.
Integration of 2 scales road network analysis (country and city)
In Thailand, road signs are not as good as in developed countries. When we want to travel
between two cities, we can use GIS network analysis software to find the best route. However,
the selected route will pass through several cities and towns. Problems start when you enter
towns and cities that do not have good road signs: you may lose your orientation or direction in
them.
Our solution is to perform network analysis at two levels: the first level is the country road
network, and the second level is the city road networks. Linkages between these two road
networks are created. When the first-level analysis is done, users can select a town or city on the
analyzed route to perform additional network analysis at the city level. The city road network
analysis is handled automatically by the software.
The software is written in Visual Basic, with MapObjects as the GIS component. The Network
Analysis module is developed using Dijkstra's algorithm.
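A minimal sketch of Dijkstra's algorithm over a node-arc network may help here; the road names and travel costs are hypothetical, and a heap keeps the frontier ordered by accumulated cost:

```python
# Minimal Dijkstra shortest-path sketch over a node-arc network.
# Edge costs are hypothetical travel times.

import heapq

def dijkstra(graph, origin):
    """graph: {node: [(neighbour, cost), ...]}. Returns cost-to-reach map."""
    costs = {origin: 0.0}
    frontier = [(0.0, origin)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > costs.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbour, edge_cost in graph.get(node, []):
            new_cost = cost + edge_cost
            if new_cost < costs.get(neighbour, float("inf")):
                costs[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour))
    return costs

roads = {
    "A": [("B", 4.0), ("C", 1.0)],
    "C": [("B", 2.0), ("D", 5.0)],
    "B": [("D", 1.0)],
}
travel = dijkstra(roads, "A")
# cheapest A -> D goes A -> C -> B -> D with total cost 4.0
```

The time-varying edge and turn costs described below fit the same framework: the edge cost is simply looked up as a function of the accumulated arrival time instead of being a constant.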
Two levels of network layers must be considered here: the country-level network model and the
city-level network model. However, the network data models of these two levels are identical.
The details of each level's layers and network data model are as follows:
The country network database is created from the 1:1,000,000 highway map of Thailand. This
map shows all highway roads and the positions of the cities. The road network layer of the
country level is captured in Shapefile format. Within the application, the node layer and Node-
Arc topology are built in order to create the relationship between nodes and lines. The unique
IDs of each road and node are calculated as well. All Node-Arc topology information and the IDs
of the two layers are stored in the Shapefile.
Each node at this level of the network can be either an intersection or a city. The node type must
be specified in the attribute table; in the case of a city, the city code must be entered. The
database structure is illustrated in figure 1.
City-level layers are digitized from a larger-scale map. It illustrates all streets in the city,
including the inbound and outbound highway roads. The two main layers are the street network
and intersections. The Node-Arc topology in the network must be built before performing the
analysis, just as in the country-level network. There is a Highway ID item that contains the
unique country-level network ID for every inbound and outbound road.
In this level of the network, the node layer that represents all intersections in the city contains
only the unique ID. Figure 2 illustrates the database structure of these two layers.
The names of the layers are defined as CityCode_road and CityCode_int for the street and
intersection layers respectively.
There are many ways to create the linkage between these two levels of network. In our research,
we use the simplest way. As mentioned above, a city code is kept in the attribute table of each
country-level node, so when the user picks a city node on the analyzed route, the city code of the
selected node can be retrieved. The application uses this city code to access the right city network
layers, because each city network layer's name starts with the city code followed by the suffix
_road or _int for the street and intersection layers respectively.
Moreover, the unique ID of the country-level network is recorded on the inbound and outbound
roads of the city level. The application thus recognizes the inbound and outbound roads of the
selected city in the country-level network. This recognition is useful for automatically
performing the network analysis at the city level using the same inbound and outbound roads.
Figure 3 illustrates the linkage between the country and city level networks.
Network Model
The network topology data model can be built within the application. This topology data model
describes the relationships between nodes and edges. The travel cost data must be specified.
There are two kinds of travel cost: the cost of traveling along each edge, and the cost of
turning at an intersection or passing through a city. The travel cost of each edge and turn
varies with the time of day. The user can use either the default costs or user-defined costs.
The origin node must be selected first, and the departure time is then specified. The network is
analyzed using Dijkstra's algorithm, with the travel and turning costs evaluated at the arrival
time at each road and intersection. The analysis creates a data structure that keeps the optimal
path from the origin to each intersection and the total cost of travel from the origin to each
intersection. After the destination node is selected, the optimal path is created and the travel
cost and travel time are derived.
Both the country and city level networks use this network data model when building and analyzing
the network. The only difference is that the origin node, destination node, and departure time of
the city-level network are derived automatically from the country-level analysis: the departure
time of the city-level path is the arrival time at the city node.
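The analysis described above can be sketched as a time-dependent variant of Dijkstra's algorithm, in which each edge cost is evaluated at the arrival time at the edge's start node. The tiny network and its cost functions below are illustrative assumptions, not the application's actual data:

```python
import heapq

def dijkstra_time_dependent(graph, origin, depart_time):
    """graph: node -> list of (neighbor, cost_fn), where cost_fn(t) gives the
    travel cost when entering the edge at time t. Returns the arrival time at
    every reachable node and a predecessor tree for path recovery."""
    arrival = {origin: depart_time}
    pred = {origin: None}
    pq = [(depart_time, origin)]
    while pq:
        t, node = heapq.heappop(pq)
        if t > arrival.get(node, float("inf")):
            continue                        # stale queue entry
        for nbr, cost_fn in graph.get(node, []):
            t_new = t + cost_fn(t)          # cost varies with the time of day
            if t_new < arrival.get(nbr, float("inf")):
                arrival[nbr] = t_new
                pred[nbr] = node
                heapq.heappush(pq, (t_new, nbr))
    return arrival, pred

# Two routes A -> C: a direct road that is slow before 9:00, or a detour via B.
graph = {
    "A": [("C", lambda t: 60 if t < 9 else 30), ("B", lambda t: 10)],
    "B": [("C", lambda t: 15)],
}
arrival, pred = dijkstra_time_dependent(graph, "A", depart_time=8)
print(arrival["C"])   # via B: 8 + 10 + 15 = 33
```

Note how the departure time changes the result: leaving at 8 the detour via B wins, whereas a later departure would make the direct road cheaper.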
Software Architecture
The application software comprises three main modules: Network Topology, Network Analysis, and
Network Editor. Both network levels are processed with these three modules.
Network Topology
The Network Topology module is used for building the Arc-Node topology and creating the node
layer. The network topology is then loaded into memory for network analysis. The default edge
travel costs can also be loaded with this module.
Network Analysis
The Network Analysis module selects the origin node, specifies the departure time, analyzes the
network, selects the destination node, and creates the optimal route. Once the optimal route
exists, a city node can be selected in order to load the city network layers, derive the
necessary information, and calculate the optimal path within the city network.
Network Editor
The Network Editor module includes three submodules: network creation, network-linkage creation,
and the network parameter editor.
In our research, both levels of network were already created in ArcInfo. The ArcInfo coverages
are then converted into Shapefile format.
To create the linkage between the networks, ArcView GIS is used to obtain the unique IDs of all
inbound and outbound roads of each city node. The highway field of the inbound and outbound roads
of the city-level network is then filled in with these values.
Basically, the network parameters are the traffic speed along each road and the turning costs.
The network parameter editor lets the user edit the traffic speed and turning cost at both
network levels.
Example
Integrating road network analysis at two scales is practical. Travelers can make an optimal route
plan on a small-scale country map, then continue their analysis on a large-scale city map in
order to understand the street pattern and find an optimal path from the inbound road to the
outbound road. With the Network Parameter Editor tools, travelers can simulate virtual road
networks at the country and city levels by updating traffic speeds and turning costs. However,
collecting the traffic speed and turning waiting time for every hour on all roads and
intersections is quite difficult. Another problem is defining the linkage between the two network
levels. Typically, almost all city-level maps in Thailand are of poor quality in terms of
orientation, measurement, and accuracy, so it takes time to find the identical roads on a country
map and a city map. If the traffic speeds, turning waiting times, and network linkages are
complete, the combination of the two network models is an efficient and practical model of the
real-world road network.
Key Features
Net Engine is versatile and has been designed to facilitate advanced network analysis in
several ways.
* Data structures and methods that are optimized for fast retrieval of network connectivity
* A way to efficiently store network data structures to a permanent disk file
* Ready-to-use algorithms such as the shortest path algorithm
* Support for some advanced modeling concepts that facilitate modeling hierarchical and
multimodal transportation networks
* A specialized memory management module that makes efficient use of computer memory
necessary for very large networks
* Support for databases from commercial suppliers, such as Etak, Tele Atlas, and NAVTEQ, as
well as an organization's internal network data sets
* An interface to MapObjects which combines an ActiveX control and more than 45
programmable ActiveX automation objects
* Deployment license options for both stand-alone client- and network server-based systems
37. Network analysis:- C-matrices for evaluating connectivity
of the network
In the real world, objects are connected to each other. Using GIS in support of network utility
management typically involves many types of features that may have connectivity to each other.
Several GIS vendors have developed software that provides network management and analysis
functions, but each system uses a proprietary format for the connectivity between geometries or
features. Topology in GIS is generally defined as the spatial
relationship between such connecting or adjacent features and is an essential prerequisite for
many spatial operations such as network analysis. There are, in general, three advantages of
incorporating topology in GIS databases: data management, data correction and spatial analysis.
Topology structures provide an automated way to handle digitizing and editing errors, and enable
advanced spatial analyses such as adjacency, connectivity and containment.
Network: a number of people or places among which there are one-on-one interactions –
friendships, airline routes, telephone calls, automobile trips, roads.
1. Beta Index: Compares the number of links with the number of nodes in a network
2. Gamma Index: Compares the actual number of links with the maximum number
3. Alpha Index: compares the number of actual (fundamental) "circuits" with the maximum
number of all possible fundamental circuits
4. Associated Number (Koenig Number): measures the centrality of a node by the number
of links needed to connect this node with the (topologically) most distant node in the
network
5. Shimbel Index: Measure of the minimum number of links necessary to connect one node
with all nodes in the network
6. Diameter of a network: Number of links in the shortest path between the furthest pair of
nodes.
7. Nodal Degree: The sum of the (direct) links which connect a node to adjacent nodes. It
can be calculated by summing the rows or columns of the (direct) connection matrix.
Limitation: indirect links are not counted.
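Most of these indices can be computed directly from an edge list. The sketch below uses a small assumed four-node network and breadth-first search for link distances; the planar-graph forms of the gamma and alpha indices are used:

```python
# Graph indices for a small undirected network: 4 nodes forming a square
# (0-1-2-3-0) plus one diagonal (1-3). The network itself is an assumption.

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # e = 5 links
nodes = range(4)                                   # v = 4 nodes

adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def link_distance(src):
    """Shortest number of links from src to every node (BFS)."""
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for n in frontier:
            for m in adj[n]:
                if m not in dist:
                    dist[m] = dist[n] + 1
                    nxt.append(m)
        frontier = nxt
    return dist

e, v = len(edges), len(list(nodes))
beta = e / v                              # links per node
gamma = e / (3 * (v - 2))                 # actual / maximum links (planar)
alpha = (e - v + 1) / (2 * v - 5)         # circuits / maximum circuits (planar)
dists = {n: link_distance(n) for n in nodes}
koenig = {n: max(d.values()) for n, d in dists.items()}    # associated number
shimbel = {n: sum(d.values()) for n, d in dists.items()}   # Shimbel index
diameter = max(koenig.values())
degree = {n: len(adj[n]) for n in nodes}                   # nodal degree

print(beta, diameter)
```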
How can we systematically select the shortest distance routes between two points say Oi and
Dj? (Note that for transportation-system modeling or for an individual's decision making, we
need to know alternative routes, ordered according to distance).
The connectivity matrix abstracts the network as a table, with each possible node (vertex)
presented as a row (an origin) and as a column (a destination). Each possible direct link between
an O and a D is presented as a “1” in the appropriate cell; if there is no direct OiDj link, a “0”
appears in the appropriate cell. Thus, network graphs or matrices are two formats for topological
data (TOPOLOGY: connections and distances among spatial data elements). Within GIS,
including topology implies providing the GIS with a spatial data matrix of explicit connections
among points, adjacencies among areas, etc.
For a simple system, the matrix tells you nothing you can’t see in the graph. The matrix is
necessary for three reasons:
1. For a very complex system, such as all the OD connections in a 300-by-300 urban
transportation plan or the diagram of interchanges in the Interstate highway system, the
matrix will instantly give you information you couldn’t easily compile from the graph.
2. Computers cannot “see” graphs, but they’re very good at reading lots of zeros and
ones very quickly.
3. We can perform simple matrix algebra on the matrices, to derive very powerful results.
If we multiply the matrix above by itself, yielding C2 (or a squared connectivity matrix), we have
a new matrix that tells us the number of two-edge routes from each O to each D.
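For example, squaring the connectivity matrix of a simple four-node path network (an assumed example, not a matrix from the text) shows how each cell of C2 counts two-edge routes:

```python
# Squaring a connectivity matrix with plain lists (no external libraries).
# Cell (i, j) of C2 counts the distinct two-edge routes from node i to node j.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Path network 0 - 1 - 2 - 3 with symmetric direct connections.
C = [
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
]

C2 = matmul(C, C)
print(C2[0][2])   # one two-edge route: 0 -> 1 -> 2
print(C2[1][1])   # two round trips of length two: via 0 and via 2
```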
• A simple way to approach this is through our graph, this time adding distance or time
values (we call this a valued graph of a network).
• An alternative is an L matrix, with time or distance entered in the cells rather than
“yes/no” to a connection. In the example below, note that we insert “zero” along the
principal diagonal and “infinity” in cells where there is no direct connection.
We can derive a total valued matrix by repeatedly composing the L matrix with itself (taking,
for each cell, the minimum sum over intermediate nodes) until we cover the diameter of the
network. This tells us the minimum distance (in time or ground distance) between each O and
each D: a very important piece of information.
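This procedure can be sketched as repeated min-plus composition of the L matrix; the three-node L matrix below is an illustrative assumption:

```python
INF = float("inf")

# Min-plus composition: entry (i, j) becomes the cheapest route from i to j
# using one more intermediate link than before.
def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# L matrix: 0 on the principal diagonal, infinity where there is no direct link.
L = [
    [0,   5,   INF],
    [5,   0,   2],
    [INF, 2,   0],
]

D = L
for _ in range(len(L) - 1):       # the diameter is at most n - 1 links
    D = min_plus(D, L)

print(D[0][2])   # 0 -> 1 -> 2 = 5 + 2 = 7
```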
These measures are useful in several ways.
1. They provide us with the dij we need for any kind of transportation modeling.
2. They show us the minimum distance between any O and D, expressed in number of links, in
time, or in ground distance.
3. They allow us to understand how an additional edge (link) or the removal of an edge will affect
the accessibility of a node and the connectivity of the network.
How can a GIS make use of this insight to construct minimum-distance routes?
The network of possible routes is entered into the GIS as a topological matrix: what nodes link
directly to what nodes.
The distances don’t have to be explicitly entered, because the GIS has the actual location of each
node; it can calculate the distance along each direct link.
• Identify whether the origin and destination share a direct link, and if so, assigning the
route to that link.
• If the origin and destination don’t share a direct link, then identify the first link along the
route, store that, then identify the link from the second node to the desired destination,
and on and on.
• Identify the shortest link from the origin.
• Is this the desired destination? If yes, record the route. If no, use the shortest link from
this intermediate node.
• Is this the desired destination? If yes, record the route. If no, use the shortest link from
this intermediate node.
• And on, until we’ve arrived at the desired destination.
• This minimum branching tree algorithm is not guaranteed to find the shortest total route.
In either case, the GIS can link multiple destinations, to establish the minimum distance for a
delivery route, such as we’ll be doing in the first case. We’ll come up with a set of customers,
and develop a route among them.
a) A diameter is “the maximum number of steps required to move from any node to any other
node through the shortest possible routes within a connected network.” or “the number of linkages
needed to connect the two most remote nodes on the network.”
b) "An algorithm is a set of mathematical expressions or logical rules that can be followed
repeatedly to find the solution to a question".
c) Each cell (x,y) in a new matrix (AB) which is the product of two other matrices (A and B), is
the sum of:
• the product of the first cell in the Xth row of the matrix A times the first cell in the Yth
column of matrix B, plus
• the product of the second cell in the Xth row of matrix A times the second cell in the Yth
column of matrix B, plus
• the product of the third cell in the Xth row of matrix A times the third cell in the Yth
column of matrix B;
• and on, until we've exhausted the length of the Xth row in matrix A and the length of the
Yth column of matrix B.
Note that the rows of matrix A must have the same length as the columns of matrix B. In this
simple network analysis, we're multiplying a square matrix by itself, so that's not a problem.
In a connectivity matrix, cell (3,1) is 0 if there's no direct link between 3 and 1; 1 if there is. Cell
(1,4) is 0 if there's no direct link between 1 and 4; 1 if there is. What does the product of cell
(3,1) and cell (1,4) tell us? Why would we be interested in adding this product to the product of
(3,2)(2,4), to the product of (3,3)(3,4), to the product of (3,4)(4,4), to the product of (3,5)(5,4)?
Network Connectivity
• Topology
Topology is the common term used to describe physical connectivity between features.
Topology is generally represented by links and nodes. A feature instance is connected to
another feature instance via a connection point. This connection point is described by a
node, and the path between two nodes is described by a link. Topology is derived from
the underlying geometry.
Link and Node model
There are two common properties for the link: cost and direction. Cost is the value which
is taken into account to find the best path. Commonly the cost is the distance of the link
which is adequate for most simple network analysis problems. Direction is used for
specifying the direction in which the network can be traveled on that link. There are also two
properties for the node: in/out cost and degree. In/out cost is the accumulated distance from
the starting point, used to find the next distance value at the other node of the same link.
Node degree gives the number of links connected to the node.
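These link and node properties can be sketched as simple data structures; the representation below is an assumption for illustration, not a specific vendor format:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    in_out_cost: float = float("inf")   # accumulated distance from the start
    links: list = field(default_factory=list)

    @property
    def degree(self) -> int:            # number of links at this node
        return len(self.links)

@dataclass
class Link:
    a: "Node"
    b: "Node"
    cost: float                         # commonly the link's length
    direction: str = "both"             # "both", "a_to_b", or "b_to_a"

    def __post_init__(self):            # register the link at both end nodes
        self.a.links.append(self)
        self.b.links.append(self)

n1, n2, n3 = Node(1), Node(2), Node(3)
Link(n1, n2, cost=4.0)
Link(n2, n3, cost=2.5, direction="a_to_b")   # a one-way (directed) link

print(n2.degree)   # 2
```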
• Directional network
For some applications topological features require direction as well as connection. If we
consider the flow of water in a river, the topology must be modelled to take into account
the flow direction of the water. However, for other applications, such as analysing boat
traffic on a river, it is more sensible to model the network as non-directional, or two-way.
Moreover, in a road network a road feature may be one-way or two-way and, as is the case in
some cities, may change depending on the time of day.
Thus there are requirements to be able to model the direction of connectivity whilst
retaining flexibility to suit the application in question.
There are several ways to handle the directional flow of a network. Some systems use a
special feature to set the directional flow of the link, whereas other systems set the
directional flow in the application using additional coding. This research sets the
directional flow as a property of a line and provides the database structure for the
directional network as a directed line. A Link feature derived from the directed line is a
directed link.
• Connectivity types
In order to model real world complexity we also need to be able to express the concept of
different types of connectivity. Whilst it may be acceptable to allow road features to
connect if they share the same 2D space, it is not appropriate for all situations e.g. fibre
optic cables, water mains etc…
To enable the different types of feature connectivity, we need to model the three ways to
connect two link features: end-connection, middle-connection and cross-connection, and
the two ways connecting link features to node features: end-connection and middle-
connection.
Figure: Connectivity types
Network Family
In the real world there are natural groupings of objects; the various types of roads and paths
that make up the road network; rivers, streams, canals, lakes etc. that make up the natural
water network; high voltage cables, low voltage cables and transformers etc. that make up an
electrical network. With some major exceptions these “families” of objects do not
topologically connect with features of other families. The concept of a “network family” is
used for establishing the various rules of connectivity between feature types. Features that do
not belong to the family cannot connect. This mechanism also provides a simple visual means
for the modification of specific connectivity rules and also provides a method for dealing with
semantic issues e.g. “street” and “strasse” can both be mapped onto the network family
feature “road”.
A family contains a collection of real-world features that may have connectivity to each other
in the same network. The example for a simple road network family is shown below.
Figure: Road Family
A matrix representing the connectivity is shown in Figure 1. The first row and column list
the line type features that may have connectivity. The inner cells show the point type
feature that facilitates connectivity between them.
The family can also be shown as a tree structure by setting a root feature. The view of
the tree structure varies depending on the root selected, but the relationships between
features stay the same. An example tree structure is shown in Figure 2.
Network analysis across two families may be required for some applications, e.g. a route
planning application may require movement between the road and the rail network
families. The network can trace across families if there is a common point connection
feature in both families. For instance, a rail station is in both the “road” and the “rail”
families and therefore a trace can cross between them via a rail station.
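The family connectivity rules can be sketched as a matrix keyed by pairs of line feature types, whose value is the point feature that allows them to connect (None meaning they may not connect). The feature names below are illustrative assumptions:

```python
# Connectivity rules for two assumed network families. Features that do not
# appear together in a family's matrix cannot connect.
road_family = {
    ("street", "street"): "junction",
    ("street", "highway"): "ramp",
    ("highway", "highway"): "interchange",
}
rail_family = {
    ("track", "track"): "switch",
    ("track", "siding"): "switch",
}
# A rail station would appear in both families, letting a trace cross between
# them (this mapping is an illustrative assumption).
shared_points = {"rail_station": ("road", "rail")}

def may_connect(family, f1, f2):
    """Return the connecting point feature for two line features, or None."""
    return family.get((f1, f2)) or family.get((f2, f1))

print(may_connect(road_family, "highway", "street"))   # ramp
print(may_connect(road_family, "street", "track"))     # None
```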
38. Network Analysis & Network Data Model
Introduction:-
Networks are an integral part of our daily lives. We drive cars from home to work on a street
Network. We cook our dinner with natural gas or electricity that is delivered through networks of
utility lines. We catch up on the news and send e-mail through the Internet, the largest wide area
network.
A network dataset contains network elements (edges, junctions, and turns) that are generated
from simple point and line features.
Edges are generated from linear features and are connected by junctions. Edges in the network
dataset are bidirectional.
Junctions are generated from point features. They connect edges and facilitate navigation. A
junction may be connected to any number of edges.
Turns are generated from line features or turn tables and describe transitions between edges.
Networks have two parts: physical network and the logical network.
The physical network consists of the data layers used to generate a network and provides the
features to generate network elements.
The logical network consists of a collection of tables that models network connectivity and
references network element relationships.
Network Analyst
Network Analyst provides a rich environment with easy-to-use menus and tools, as well as the
robust functionality available in the geoprocessing environment for modeling and scripting.
Networks are typically either directed flow networks or undirected flow networks. In a directed
flow network, the flow moves from a source toward a sink and the resource moving makes no
travel decisions (e.g., river system).
In an undirected flow system, flow is not entirely controlled by the system. The resource may
make travel decisions that affect the result (e.g., a traffic system).
Typical users include utility industries and transit agencies. Typical analyses include:
• Drive-time analysis
• Point-to-point routing
• Route directions
• Optimum route
• Closest facility
• Origin-destination analysis
A GIS for transportation comprises interconnected hardware, software, data, people,
organizations, and institutional arrangements for collecting, storing, analyzing, and
communicating particular types of information about the earth.
First, transportation entities have obvious physical descriptions but can also have logical
relationships with other transportation entities.
Second, entities exist both in the real world and in the database, or virtual, world. The
relationships between the physical and logical realms are often one-to-many, creating database
design complexities.
Case studies:
In India, nearly 50% of the 6 lakh villages have road access. The Government of India has
committed to providing full connectivity under a special programme known as the Pradhan Mantri
Gram Sadak Yojana (PMGSY).
GIS Based Approach: The data items required for comprehensive rural road planning and
development can be broadly grouped into three categories:
(1) Village data: name and code number, demographic data (population), and infrastructure data
(2) Rural road data: road reference data, road geometric details, road pavement condition, and
terrain and soil type
(3) Map data: the map at block level should be prepared at 1:50,000 scale, showing the location
of habitations/settlements, boundaries, road network, water bodies (ponds, lakes, etc.), rivers,
and irrigation canals
The database developed above has been applied to the Rupauli Block in Purnia District of Bihar.
Figure: Optimum Network of Rupauli
• The Village and Road Information System (V&RIS) developed in a GIS environment is very
useful for problem identification, planning, allocation of resources, and location of various
socio-economic facilities for integrated rural development
• It is also useful for creating, maintaining, and accessing the GIS database
• Further, using the information available in the road network layer, it is easy to estimate the
construction cost of selected links
The system comprises an 18-reservoir network with both serial and parallel interconnections, as
well as extensive water transfer and conveyance subsystems.
The primary purpose of this complex water resource system is to provide drinking water for the
country's urban and rural areas, irrigation and industrial water supply, flood and low flow
management, and hydropower generation.
GIS Applications For Water Supply
The Muskingum County GIS department, Ohio, has over 10,000 existing systems of record, and over
300 new systems are installed each year.
Figure: Parcel information with land contours, roads, and soil types displayed.
• The GIS allows sanitarians to perform sewage treatment system reviews of existing systems in
minutes
• Using GIS as a visual tool, sanitarians can now have detailed phone consultations with
property owners
• It allows sanitarians to quickly use geographic information critical to decision making, and
eliminates the need to refer to cumbersome printed maps
39. Methods for evaluating point clusters: Random and Cluster
The science of geography attempts to explain and predict the spatial distribution of human
activity and physical features on the Earth’s surface. Geographers use spatial statistics as a
quantitative tool for explaining the geographic patterns of distribution.
The term spatial pattern often refers to various levels of spatial regularity, which often include
local clusters of points, global structure of a surface, etc.
Patterns of points
Agglomeration or grouping:
Suppose theory suggests that a particular set of objects (plants, animals, people, towns, etc.)
tends to group or agglomerate in certain ways. Point Pattern Analysis is helpful in measuring
various characteristics of the groups (size, spacing, density, etc.) and leads to the testing of
hypotheses derived from theory. For example, studies of animal behavior suggest that certain
types of spatial patterns help to verify theories of territoriality and social organization.
Diffusion:
Many theories have been proposed for the way individuals or ideas spread or spatially
multiply. Point pattern analysis can be helpful in verifying the existence of a diffusion
process and in calibrating rates of change. An example comes from the study of how ideas spread
according to principles based on the nearness of communities and their resistance to accepting
new ideas. By analyzing the pattern at various moments in time and in different environments,
these notions can be tested.
Competition:
It is often desirable to investigate spacing characteristics when it is suspected that competitive
forces are at work. Sometimes competition yields maximum spacing and other times
grouping. A well known example comes from the literature on town spacing. Spatial aspects
of economic theories of marketing can be tested by point pattern analysis.
Segregation or associations:
Hypotheses about the existence of spatial segregation in a many-species population of
individuals can be tested with point pattern analysis. Following urban rent theory we may
expect two kinds of land uses to repel each other. This expectation can be tested, as well as
theoretical expectations of an association among several land uses.
Pattern change
Many theoretical statements deal directly with the manner in which patterns change. For example,
the birth and death processes of plant and animal populations, as well as human populations, may
well be studied by point pattern analysis. Interest might be in the rates of change of patterns.
2. The result depends on the definition of the region in which the points are distributed.
4. Those limitations are quite similar to those of the nearest neighbor distance method.
Consequently, one solution is to try various cell sizes and interpret the result as a function of
the spatial scale represented by the cell size.
Kernel density estimation
This method counts the incidents in an area (a kernel) centered at the location where the
estimate is made, producing a smooth density surface. By contrast, partitioning techniques
assign the incidents to a number of different clusters; oftentimes the user can specify the
number of clusters, and in some forms of such analysis all incidents, even the outliers, are
assigned to one and only one group. Kernel density estimation is very good for analyzing point
patterns to discover hot spots.
1. This method provides us with a useful link to geographical data because it is able to transform
our data into a density surface.
2. Our choice of r, the kernel bandwidth, strongly affects our density surface.
3. Also, we can weight these patterns with other data – such as density of populations and
unemployment rates.
4. In dual kernel estimates, you can weight the estimates against another set of incidents.
For instance, you might want to analyze the number of assaults relative to the distribution
of establishments that are allowed to serve liquor.
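A one-dimensional sketch of kernel density estimation (the incident coordinates and the bandwidth r below are assumptions) shows how a cluster of incidents produces a higher density, i.e. a hot spot, than an isolated outlier:

```python
import math

def kernel_density(x, incidents, r):
    """Estimate incident density at location x using a Gaussian kernel of
    bandwidth r: each incident contributes more the closer it is to x."""
    coef = 1.0 / (r * math.sqrt(2 * math.pi))
    return sum(coef * math.exp(-0.5 * ((x - p) / r) ** 2) for p in incidents)

incidents = [1.0, 1.2, 1.1, 5.0]          # a cluster near 1 plus an outlier
hot = kernel_density(1.1, incidents, r=0.5)
cold = kernel_density(5.0, incidents, r=0.5)
print(hot > cold)   # the cluster yields the higher density (the "hot spot")
```

Re-running with a larger r smooths the surface and shrinks the contrast between the two locations, which is the bandwidth effect noted in point 2 above.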
Ground Control Points (GCPs):-
GCPs are physical points on the ground whose positions are known with respect to some
horizontal coordinate system and/or vertical datum.
A GCP is any point which is recognisable on remotely sensed images, maps, and aerial
photographs, and which can be accurately located on each of these. It can then be used as a
means of reference between maps or, more commonly, between maps and digital images. GCPs are
often used in the geometric correction of remotely sensed images and in surveying.
History:-
Ground control has been established through ground surveying techniques in the form of
triangulation, trilateration, traversing, and leveling.
Currently, the establishment of ground control is aided by the use of GPS procedures.
• When mutually identifiable on the ground and on a photograph, these can be used to
establish spatial position and orientation of a photograph relative to the ground at the
instant of exposure.
• They are normally used to associate projection coordinates with locations on a raw
(uncorrected) image; however, they can in principle be used to relate locations in any
two georeferencing systems, normally a raw image coordinate system and some projection
system.
Number of GCPs:-
A ground control point segment contains up to 256 ground control points. 45 points have to be
selected for each scene. GCPs should be selected on panchromatic data.
GCPs distribution:-
• GCPs should be selected uniformly across the scene: select points near the edges of the
image, with an even distribution over the image.
• GCP selection should also respect terrain variation in the scene: select points at both
the highest and lowest elevations.
GCPs locations:-
• Cultural features usually make the best GCPs. These include road and railroad
intersections, river bridges, large low buildings (hangars, industrial buildings, etc.),
and airports.
• Line features should have well-defined edges. The GCP should always be placed at the
center of the intersection. To use an intersection as a GCP, the two line features forming
it have to cross at an angle larger than 60 degrees.
• Natural features are generally not preferred because of their irregular shapes. If a
natural feature has well-defined edges, it may be used as a ground control point: forest
boundaries, forest paths, forest clearings, river confluences, etc. When selecting such
points, it must be taken into account that certain boundaries (forest, water bodies) are
subject to variation and may differ between images and maps.
• Applying local enhancements can be very useful for defining the exact image position of
the GCP.
Control points may be identified in two ways:
1) After photography – ensuring that the points are identifiable on the image.
2) Before photography – control points may be premarked with artificial targets. Crosses
that contrast with the background land cover make ideal control point markers. Their size
is selected in accordance with the scale of the photography to be flown and their material
form can be quite variable.
eg. Markers painted on contrasting sheets of Masonite, plywood, or heavy cloth.
Overlapping areas:-
Identical GCPs should be selected in the areas where two or more Landsat scenes overlap. Such
points will have the same X,Y,Z coordinates and will differ only in corresponding image
coordinates.
Documentation for each selected point can be:
• a copy of the part of the paper map showing the selected point and its surroundings, or
• an image chip from the scanned map showing the selected point and its surroundings, or
• a written description or sketch of the point
Each ground control point has the following values associated with it:-
• Id: A unique numeric identifier for the control point. If it is negative, it is interpreted as
indicating that the point is a check point, and should not contribute to the transformation
model.
• System 1 X: The X coordinate in the first georeferencing system. This is normally a pixel
location in the image.
• System 1 Y: The Y coordinate in the first georeferencing system. This is normally a line
location in the image.
• System 1 Elevation: The elevation of the location in the first georeferencing system. This
is normally zero, and ignored by applications.
• System 2 X: The X coordinate in the second georeferencing system. This is normally a
location in projection coordinates.
• System 2 Y: The Y coordinate in the second georeferencing system. This is normally a
location in projection coordinates.
• System 2 Elevation: The elevation in the second georeferencing system. This should be
zero if it is not used.
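A sketch of how such GCP records might be used: points with a negative Id are treated as check points and excluded from the transformation model, then used to evaluate it. For brevity, a per-axis scale-and-offset fit stands in for the full affine or polynomial model used in practice, and all coordinates below are assumptions:

```python
def fit_axis(src, dst):
    """Least-squares fit dst = a * src + b for one coordinate axis."""
    n = len(src)
    mx, my = sum(src) / n, sum(dst) / n
    a = sum((x - mx) * (y - my) for x, y in zip(src, dst)) / \
        sum((x - mx) ** 2 for x in src)
    return a, my - a * mx

# Records: (id, pixel, line, proj_x, proj_y); id < 0 marks a check point.
gcps = [
    (1, 0, 0, 500000.0, 4100000.0),
    (2, 100, 0, 501000.0, 4100000.0),
    (3, 0, 100, 500000.0, 4099000.0),
    (-4, 50, 50, 500500.0, 4099500.0),   # check point, excluded from the fit
]

control = [g for g in gcps if g[0] > 0]            # negative ids excluded
ax, bx = fit_axis([g[1] for g in control], [g[3] for g in control])
ay, by = fit_axis([g[2] for g in control], [g[4] for g in control])

# Evaluate the model at the check point to estimate the residual error.
chk = next(g for g in gcps if g[0] < 0)
print(ax * chk[1] + bx, ay * chk[2] + by)
```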
Flight Planning:-
2) PURPOSE OF PHOTOGRAPHY
compilation of topographic maps in a stereoscopic plotting instrument
Requirements:
• Good Metric Quality Photos: Calibrated Cameras And Films (High-resolution)
• Favorable B/H Ratio
3) PHOTOGRAPHIC SCALE
• Scale of Final Map produced
• Contour interval
• Capabilities of the stereo-plotting instruments
• Enlargement ratio (usually 5x)
• Variation of scale due to ground elevation
4) FLYING HEIGHT
a) Given the focal length of the camera lens and the compilation scale of the map, the
necessary flying height can be calculated from the C-factor:

C-Factor = Flying Height / Contour Interval

Flying Height = Contour Interval x C-Factor

C-Factor (of instruments): 750-250
5) COVERAGE: ENDLAP AND SIDELAP
7) WEATHER CONDITIONS:
This is beyond the control of even the best planner. Only a few days of the year are ideal for aerial
photography. In order to take advantage of clear weather, commercial aerial photography firms
will fly many jobs in a single day, often at widely separated locations.
To eliminate most of the errors that might occur, using software for the calculations and
preparing all plans digitally is considered the best method, e.g. flight-planning software.
Based on the above parameters the Mission Planner prepares computations and a flight map that
indicate to the flight crew:
1. flying height above datum from which the photos are to be taken
2. location, direction & number of flight lines to be made over the area to be photographed
3. time interval between exposures
4. number of exposures on each flight line
5. total number of exposures necessary for the mission
Flight plans are normally portrayed on a map for the flight crew. However, old photography, an
index mosaic, or even a satellite image may be used for this purpose.
[Figure: overlap (endlap) and sidelap between successive aerial photographs]
The computations prerequisite to preparing a flight plan are given in the following
example:-
A study area is 10 km wide in the east-west direction & 16 km long the north-south direction. A
camera having a 152.4 mm focal length lens & a 230mm format is to be used. The desired photo
scale is 1:25,000 and the nominal endlap & sidelap are to be 60% & 30%. Beginning and ending
lines are to be positioned along the boundaries of the study area. The only map available for the
area is at a scale of 1:62,500. This map indicates that the average terrain elevation is 300m above
datum. Perform the computations necessary to develop a flight plan.
Solution:-
a) Use north-south flight lines to minimise the number of lines required and consequently
the number of aircraft turns and realignments necessary.
Flying in a cardinal direction often facilitates the identification of roads, section lines, and
other features that can be used for aligning the flight lines.
b) Find the flying height above terrain and add the mean site elevation to find the flying
height above mean sea level:
H = f/S + h_avg = (0.1524 m) / (1/25000) + 300 m = 3810 m + 300 m = 4110 m
c) Determine ground coverage per image from film format size and photo scale:
Coverage per photo = (0.23 m) / (1/25000) = 5750 m on a side
d) Determine ground separation between photos on a line for 40 percent advance per photo
(i.e. 60% endlap):
0.40 x 5750 m = 2300 m between photo centers
e) Assuming an aircraft speed of 160 km/hr, compute the time between exposures:
(2300 m/photo) / (160 km/hr x 1000 m/km / 3600 sec/hr) = 51.75 sec
f) Because the intervalometer can only be set in whole seconds, the number is rounded off
to 51 sec. For 60% coverage, recalculate the distance between photo centers using the
reverse of the above equation:
51 sec/photo x 160 km/hr x 1000 m/km / 3600 sec/hr = 2267 m
g) Compute the number of photos per 16 km line by dividing this length by the photo
advance. Add one photo to each end and round the number up to ensure coverage:
16000 m/line / 2267 m/photo + 1 + 1 = 9.1 photos/line
Use 10 photos per line
h) If the flight lines are to have a sidelap of 30% of the coverage, they must be separated by
70% of the coverage:
0.70 x 5750 m coverage = 4025 m between flight lines
i) Find the number of flight lines required to cover the 10 km study area width by dividing
this width by the distance between flight lines. This division gives the number of spaces
between flight lines; add 1 to arrive at the number of lines:
10000 m width / 4025 m/flight line + 1 = 3.48
Use 4 flight lines
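The flight-plan computations above can be collected into a short script. This is a sketch of the worked example, assuming the same 160 km/hr ground speed:

```python
import math

def flight_plan(width_m, length_m, focal_m, format_m, scale_denominator,
                endlap, sidelap, terrain_elev_m, speed_kmh):
    """Sketch of the flight-plan computations in the worked example."""
    # flying height above datum: H = f/S + h_avg
    H = focal_m * scale_denominator + terrain_elev_m
    # ground coverage per photo, from format size and scale
    coverage = format_m * scale_denominator
    # ground advance per photo for the given endlap
    advance = (1 - endlap) * coverage
    # intervalometer setting, rounded down to whole seconds
    speed_ms = speed_kmh * 1000 / 3600
    interval = math.floor(advance / speed_ms)
    # actual advance for the rounded interval
    actual_advance = interval * speed_ms
    # photos per line (one extra at each end), rounded up
    photos = math.ceil(length_m / actual_advance + 2)
    # distance between flight lines for the given sidelap
    line_spacing = (1 - sidelap) * coverage
    # number of flight lines = spaces between lines + 1, rounded up
    lines = math.ceil(width_m / line_spacing + 1)
    return H, coverage, interval, photos, lines

plan = flight_plan(10000, 16000, 0.1524, 0.23, 25000, 0.60, 0.30, 300, 160)
print(plan)
```

For the example's inputs this reproduces the hand computation: 4110 m flying height, 5750 m coverage, a 51 sec interval, 10 photos per line, and 4 flight lines.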
The main goal of planning is to find the best-fit flight lines and camera exposure stations. In
order to cover the project area with the minimum number of models, flight lines and camera
exposure stations must be planned carefully. This is also important for flight safety, for reducing
aerial survey operational costs, and for speeding up the preparation and execution of the photo
missions.
41. Global Positioning System: Concept, Coordinates & Types
Concept:-
As the name suggests, the global positioning system (GPS) is used to determine the position of an
object from signals received from satellites. Utilizing a constellation of at least 24 medium earth
orbit satellites that transmit precise microwave signals, the system enables a GPS receiver to
determine its location, speed/direction and time. The GPS provides continuous three-
dimensional positioning 24 hrs a day throughout the world. The Global Positioning System (GPS)
is a burgeoning technology, which provides unequalled accuracy and flexibility of positioning for
navigation, surveying and GIS data capture. Developed by the United States Department of
Defense, it is officially named NAVSTAR GPS.
GPS positioning can be carried out in two modes. The first mode is known as point positioning,
the second as relative positioning. If the object to be positioned is stationary, it is called static
positioning; when the object is moving, it is called kinematic positioning. Usually, static
positioning is used in surveying and kinematic positioning in navigation.
The GPS uses satellites and computers to compute positions anywhere on earth. The GPS is based
on satellite ranging. That means the position on the earth is determined by measuring the distance
from a group of satellites in space. The basic principles behind GPS are really simple, even
though the system employs some of the most high-tech equipment ever developed. In order to
understand GPS basics, the system can be categorised into three segments: the space, control, and
user segments.
To compute a position in three dimensions, four satellite measurements are needed. The GPS uses
a trilateration (satellite-ranging) approach to calculate positions. The GPS satellites are so high up
that their orbits are very predictable, and each satellite is equipped with a very accurate atomic
clock.
Components of a GPS
The Control Segment consists of five monitoring stations (Colorado Springs, Ascension Island,
Diego Garcia, Hawaii, and Kwajalein Island). Three of the stations (Ascension, Diego Garcia, and
Kwajalein) serve as uplink installations, capable of transmitting data to the satellites, including
new ephemerides (satellite positions as a function of time), clock corrections, and other broadcast
message data, while Colorado Springs serves as the master control station. The Control Segment
is the sole responsibility of the Department of Defense(DOD) who undertakes construction,
launching, maintenance, and virtually constant performance monitoring of all GPS satellites.
The DOD monitoring stations track all GPS signals for use in controlling the satellites and
predicting their orbits. Meteorological data also are collected at the monitoring stations,
permitting the most accurate evaluation of tropospheric delays of GPS signals. Satellite tracking
data from the monitoring stations are transmitted to the master control station for processing. This
processing involves the computation of satellite ephemerides and satellite clock corrections. The
master station controls orbital corrections, when any satellite strays too far from its assigned
position, and necessary repositioning to compensate for unhealthy (not fully functioning)
satellites.
The Space Segment consists of the constellation of NAVSTAR earth-orbiting satellites. The
current Defence Department plan calls for a full constellation of 24 Block II satellites (21
operational and 3 in-orbit spares). The satellites are arrayed in 6 orbital planes, inclined 55
degrees to the equator. They orbit at altitudes of about 12,000 miles, with orbital periods of 12
sidereal hours (i.e., determined by or from the stars), approximately one half of the earth's
rotation period, providing continuous 3-D position fixes. The next block of satellites is called Block
IIR, and they will provide improved reliability and have a capacity of ranging between satellites,
which will increase the orbital accuracy. Each satellite contains four precise atomic clocks
(Rubidium and Cesium standards) and has a microprocessor on board for limited self-monitoring
and data processing. The satellites are equipped with thrusters which can be used to maintain or
modify their orbits.
The user segment is a total user and supplier community, both civilian and military. The User
Segment consists of all earth-based GPS receivers. Receivers vary greatly in size and complexity,
though the basic design is rather simple. The typical receiver is composed of an antenna and
preamplifier, radio signal microprocessor, control and display device, data recording unit, and
power supply. The GPS receiver decodes the timing signals from the 'visible' satellites (four or
more) and, having calculated their distances, computes its own latitude, longitude, elevation, and
time. This is a continuous process and generally the position is updated on a second-by-second
basis, output to the receiver display device and, if the receiver provides data capture capabilities,
stored by the receiver-logging unit.
GPS Positioning Types
Absolute Positioning
This mode of positioning relies upon a single receiver station. It is also referred to as 'stand-alone'
GPS, because, unlike differential positioning, ranging is carried out strictly between the satellite
and the receiver station, not on a ground-based reference station that assists with the computation
of error corrections.
Differential Positioning
Relative or Differential GPS carries the triangulation principles one step further, with a second
receiver at a known reference point. To determine a point's position relative to the known earth-
surface point, this configuration requires the collection of an error-correcting message from the
reference receiver. Differential-mode positioning relies upon an established control point: the
reference station is placed on the control point, whose coordinates are known. This allows a
correction factor to be calculated and applied to other roving GPS units used in the same area and
in the same time series.
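The differential idea can be sketched as follows: the reference receiver on the known point compares its measured pseudorange to the geometrically true range, and nearby rovers apply the resulting correction. All range values below are illustrative numbers, not real measurements:

```python
def pseudorange_correction(true_range_m: float, measured_range_m: float) -> float:
    """Correction = true range (from the known control-point coordinates)
    minus the pseudorange the reference station actually measured."""
    return true_range_m - measured_range_m

def apply_correction(rover_measured_m: float, correction_m: float) -> float:
    """A roving receiver applies the reference station's correction for the
    same satellite, cancelling errors common to both receivers."""
    return rover_measured_m + correction_m

corr = pseudorange_correction(20_186_312.0, 20_186_348.5)  # -36.5 m of shared error
print(apply_correction(20_191_870.5, corr))                # corrected rover range
```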
GPS Co-ordinates:-
To start off, the receiver picks which C/A codes to listen for by PRN number, based on the
almanac information it has previously acquired. As it detects each satellite's signal, it identifies it
by its distinct C/A code pattern, then measures the time delay for each satellite. To do this, the
receiver produces an identical C/A sequence using the same seed number as the satellite. By
lining up the two sequences, the receiver can measure the delay and calculate the distance to the
satellite, called the pseudorange[12].
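Numerically, a pseudorange is just the measured signal delay multiplied by the speed of light (receiver clock error ignored in this sketch; the 67 ms delay is an illustrative value for a satellite roughly 20,200 km away):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pseudorange(delay_s: float) -> float:
    """Pseudorange = measured travel time x speed of light.
    Clock biases are ignored, hence 'pseudo' range."""
    return C * delay_s

print(pseudorange(0.0674))  # on the order of 20,000 km
```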
[Figure: overlapping pseudoranges, represented as curves, are combined to yield the probable
position]
Next, the orbital position data, or ephemeris, from the Navigation Message is then downloaded to
calculate the satellite's precise position. A more-sensitive receiver will potentially acquire the
ephemeris data quicker than a less-sensitive receiver, especially in a noisy environment. Knowing
the position and the distance of a satellite indicates that the receiver is located somewhere on the
surface of an imaginary sphere centered on that satellite and whose radius is the distance to it.
Receivers can substitute altitude for one satellite, which the GPS receiver translates to a
pseudorange measured from the center of the earth.
Calculating a position with the P(Y) signal is generally similar in concept, assuming one can
decrypt it. The encryption is essentially a safety mechanism: if a signal can be successfully
decrypted, it is reasonable to assume it is a real signal being sent by a GPS satellite. In
comparison, civil receivers are highly vulnerable to spoofing since correctly formatted C/A
signals can be generated using readily available signal generators. RAIM features do not protect
against spoofing, since RAIM only checks the signals from a navigational perspective.
GPS coordinates can also be found through the individual websites of different companies; one is
required only to enter the desired destination's address.
GPS Types:-
Handheld GPS: This GPS unit can be used while walking in strange towns, hiking, bicycling,
boating or marking landmarks. These units are also portable.
GPS Fishfinders: GPS technology can be used for fishing purposes whether by a weekend
hobbyist or a tournament angler, in fresh water or on a boat out in salt water. Fishing companies
are also increasingly using GPS for fish tracking.
Laptop GPS: There are several ways to put together a laptop GPS system. For use in an
automobile, there are GPS receivers that are made to connect to a laptop via a cable. This allows
the receiver to be placed near the windshield where it can gather satellite signals. The wired GPS
receivers for a laptop are the most inexpensive way to go.
GPS Watches: Most are marketed as speed and distance systems for athletes and do not provide
location information, though some GPS watches do. The speed and distance systems are
composed of two parts: a GPS receiver and a watch that are wirelessly connected by a radio
signal. The GPS receiver can be worn on the arm or clipped to a belt.
Bluetooth GPS: Bluetooth GPS is a combination that allows you to have a wireless GPS unit
display on a Bluetooth-enabled device such as a PDA or Pocket PC. Bluetooth GPS receivers
became available in late 2002. They can be used in an automobile or for hiking, among other uses.
Because they are wireless, they are powered by their own batteries.
GPS Palm: Most GPS Palms are smaller and some are less expensive. They also are quick and
simple to use. With Palms, there is a large choice of software programs and a wide range of
accessories.
GPS Cell Phones: Cell phone manufacturers can incorporate a GPS receiver in a cell phone.
Advantages of this when used in an automobile are: 1) driving directions in your automobile; and
2) the ability to use the cell phone as a handheld GPS for out-of-car purposes.
Golf GPS: There are two main ways one can have a golf GPS system. One is for the player to
have her or his own unit. The other is for the course to provide the system. From the golf course's
point of view, a GPS system that the course owns can be beneficial in many ways. An integrated
system can allow players to order food and drinks, allow two-way communications, and give
weather alerts. The system can even be a revenue generator by being an advertising medium.
GPS Maps: GPS maps provide point of interest coordinates, map images, route data, and track
data for GPS receivers. GPS map software is made for PDAs, laptops, desktop PCs, and specific
brands of GPS units. Many GPS maps have the capability to upload waypoints, routes, and tracks
to some GPS units.
GPS Tracking: With the growing popularity of GPS, there are many companies offering GPS
tracking systems for a wide variety of uses. Uses of GPS tracking systems are:
• Pets
• Wildlife
• Law enforcement
• Theft prevention
• Vehicle
GPS Vehicle Tracking: GPS vehicle tracking has many uses. Consumers can use these systems
to help recover their vehicle if it is stolen or keep tabs on a teenager in the family car. Commercial
users can improve efficiency and individuals using mass transit will be able to find out if their bus
or train is on time.
GPS PCMCIA: GPS and PCMCIA is a combination that allows laptops, PDAs, and the like to
function as GPS units. PCMCIA (Personal Computer Memory Card International Association)
was formed by several Integrated Circuit card manufacturers in 1989. Its purpose was to adopt an
industry standard for computers to accept peripheral devices such as add-on memory and
modems.
Marine GPS: Marine GPS navigation requires knowledge above and beyond land navigation.
Rocks, shallow water, and wrecks are common obstacles, and since fog often occurs on coastal
waters, it's critical to know where a person is. Recreational boaters usually stick close to land and
this may seem to be a clear advantage, but that is where the majority of hazards are. GPS gives
location, but one needs additional information like charts and a compass.
GPS PDA: A personal digital assistant (PDA) is one of those little hand-held computer gadgets
that people are using for a calendar, notes, calculator, mail and contacts. PDAs can become a
phone, a camera, and also a GPS receiver. Pocket PC GPS is a term that can be used to generally
refer to any personal digital assistant (PDA) that has GPS capability.
GPS Personal Locators: GPS personal locators contain a GPS receiver. The device
transmits the GPS data over a GSM/GPRS (cell phone) system. Depending on the system, the
location data can be accessed on a website or transmitted to a control center, which then contacts
the appropriate people. Many of the systems that allow information access on a website let the
user see the GPS location in real-time on a moving map.
USB GPS Receivers: USB GPS receivers are devices that need to be connected to the USB port
of a laptop computer to function. This type of unit is sometimes called a "mouse GPS" as it
resembles a computer mouse.
42. Ground Truth & Accuracy Assessment
Ground Truth:-
In order to "anchor" the satellite measurements, we need to compare them to something we know.
One way to do this is by what we call "ground truth", which is one part of the calibration process.
This is where a person on the ground (or sometimes in an airplane) makes a measurement of the
same thing the satellite is trying to measure, at the same time the satellite is measuring it. The two
answers are then compared to help evaluate how well the satellite instrument is performing.
Usually we believe the ground truth more than the satellite, because we have more experience
making measurements on the ground and sometimes we can see what we are measuring with the
naked eye.
Ground truth is a term used in cartography, meteorology, analysis of aerial photographs, satellite
imagery and a range of other remote sensing techniques in which data are gathered at a distance.
Ground truth refers to information that is collected "on location". In remote sensing, this is
especially important in order to relate image data to real features and materials on the ground. The
collection of ground-truth data enables calibration of remote-sensing data, and aids in the
interpretation and analysis of what is being sensed.
More specifically, ground truth may refer to a process in which a pixel on a satellite image is
compared to what is there in reality (at the present time) in order to verify the contents of the pixel
on the image. In the case of a classified image, it allows supervised classification to help
determine the accuracy of the classification performed by the remote sensing software and
therefore minimize errors in the classification such as errors of commission and errors of
omission.
• Geophysical parameter data, measured or collected by other means than by the instrument
itself, used as correlative or calibration data for that instrument data. It includes data taken
on the ground or in the atmosphere. Ground truth data are another measurement of the
phenomenon of interest; they are not necessarily more "true" or more accurate than the
instrument data. Source: EPO
• The actual facts of a situation, without errors introduced by sensors or human perception
and judgment. For example, the actual location, orientation, and engine and gun state of
an M1A1 tank in a live simulation at a certain point in time is the ground truth that could
be used to check the same quantities in a corresponding virtual simulation.
• Data collected on the ground to verify mapping from remote sensing data such as air
photos or satellite imagery.
• To verify the correctness of remote sensing information by use of ancillary information
such as field studies.
• In cartography and analysis of aerial photographs and satellite imagery, the ground truth
is the facts that are found when a location is field checked -- that is, when people actually
visit the location on foot.
Ground truth is usually done on site, performing surface observations and measurements of
various properties of the features of the ground resolution cells that are being studied on the
remotely sensed digital image. It also involves taking geographic coordinates of the ground
resolution cell with GPS technology and comparing those with the coordinates of the pixel being
studied provided by the remote sensing software to understand and analyze the location errors and
how it may affect a particular study.
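The comparison of GPS-surveyed coordinates with the coordinates of the studied pixel reduces to a simple offset computation. The projected coordinates below (in metres) are illustrative:

```python
import math

def location_error(gps_xy, pixel_xy):
    """Euclidean offset, in map units, between a GPS-surveyed ground point
    and the centre of the pixel the remote sensing software reports."""
    dx = gps_xy[0] - pixel_xy[0]
    dy = gps_xy[1] - pixel_xy[1]
    return math.hypot(dx, dy)

# illustrative coordinates in a projected system (metres)
print(location_error((304203.1, 4306148.0), (304200.0, 4306150.0)))
```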
Ground truth is important in the initial supervised classification of an image. When the identity
and location of land cover types are known through a combination of field work, maps, and
personal experience these areas are known as training sites. The spectral characteristics of these
areas are used to train the remote sensing software using decision rules for classifying the rest of
the image. These decision rules such as Maximum Likelihood Classification, Parallelepiped
Classification, and Minimum Distance Classification offer different techniques to classify an
image. Additional ground truth sites allow the remote sensor to establish an error matrix which
validates the accuracy of the classification method used. Different classification methods may
have different percentages of error for a given classification project. It is important that the remote
sensor chooses a classification method that works best with the number of classifications used
while providing the least amount of error.
[Figure: ground truth of a satellite image compared with a person measuring on the ground]
The Global Positioning System has developed into an efficient GIS data collection technology
which allows users to compile their own data sets directly from the field as part of ‘ground
truthing’. Ground-truth surveys are essential components for the determination of accuracy
assessment for classified satellite imagery.
Ground truth also helps with atmospheric correction. Since images from satellites obviously have
to pass through the atmosphere, they can get distorted because of absorption in the atmosphere.
So ground truth can help fully identify objects in satellite photos.
1. The first is what we call a "field campaign". This is where several scientists and
technicians take lots of equipment and set it up somewhere for a short but intense period
of measurement. We get a lot of information from field campaigns, but they are expensive
and only run for a short time.
2. Another source of ground truth is the on-going work of the National Weather Service.
They have a record of weather conditions stretching back for over 100 years.
Observations are made at regular intervals at offices around the country. These provide a
nice record but are not necessarily taken at the same time a satellite passes over the spot.
As clouds are very changeable, things can change completely in even a few minutes.
3. Another option for ground truth is S' COOL. Students at schools around the world can be
involved by making an observation within a few minutes of the time that a satellite views
their area.
Accuracy Assessment:-
INTRODUCTION:
Accuracy assessment is one of the most important considerations in the evaluation of remotely
sensed imagery. Too often, it is not done when imagery is produced. The accuracy of an image is
affected by many variables, including the spatial and spectral resolution of the hyperspectral
sensor, processing statistics used, types of classifications chosen, limits of detection of different
surface materials, suitability of reference spectra used for image analysis training, the type and
amount of ground truth data acquisition, and type of atmospheric correction algorithm applied to
the imagery.
Definition:
Comparison of a classification with ground-truth data to evaluate how well the classification
represents the real world.
Several kinds of errors - mainly those of "commission" or "omission" - are discussed as a basis for
setting up an accuracy assessment program. Accuracy itself is defined and the point is made that
much depends on just how any class, feature, or material being classified is meaningfully set forth
with proper descriptors. Two factors are important in achieving suitable (hopefully, high)
accuracy: spatial resolution (which influences the mixed pixel effect) and number of spectral
bands involved in the classification.
Errors of commission:-
An example of an error of commission is when certain pixels that are one thing, such as trees, are
classified as another thing, such as asphalt. Ground truthing ensures that the error matrices have a
higher accuracy percentage than would be the case if no pixels were ground truthed.
Errors of omission:-
An example of an error of omission is when pixels of a certain thing, for example maple trees, are
not classified as maple trees. The process of ground truthing helps to ensure that the pixel is
classified correctly and the error matrices are more accurate.
Accuracy Assessment:
Assessing the accuracy of a remote sensing output is one of the most important steps in
any classification exercise!!
Without an accuracy assessment, the output or results are of little value.
There are a number of issues relevant to the generation and assessment of errors in a
classification.
These include:
• The nature of the classification;
• Sample design and
• Assessment sample size.
• Nature of Classification:-
1. Class definition problems occur when trying to extract information from an image, such
as tree height, that the data cannot realistically provide. If this happens the error rate will
increase.
2. A common problem in classifying remotely sensed data is the use of inappropriate class
labels, such as cliff, lake or river, all of which are landforms and not cover-types.
Similarly, a common error is that of using class labels which define land-uses; these
features are commonly made up of several cover classes.
3. The final point here, in terms of the potential for generation of error is the mislabeling of
classes. The most obvious example of this is to label a training site water when in fact it is
something else. This will result in, at best a skewing of your class statistics if your
training site samples are sufficiently large, or at worst shifting the training statistics
entirely if your sites are relatively small.
• Sample Design:-
1. In addition to being independent of the original training sample, the sample used must be
of a design that will ensure consistency and objectivity.
2. A number of sampling techniques can be used. Some of these include random, systematic,
and stratified random.
3. Of the three the systematic sample is the least useful. This approach to sampling may
result in a sample distribution which favors a particular class depending on the
distribution of the classes within the map
4. Only random sample designs can guarantee an unbiased sample.
5. The truly random strategy however may not yield a sample design that covers the entire
map area, and so may be less than ideal.
6. In many instances the stratified random sampling strategy is the most useful tool to use.
In this case the map area is stratified based on either a systematic breakdown followed by
a random sample design in each of the systematic subareas, or alternatively through the
application of a random sample within each of the map classes. The use of this approach
will ensure that one has an adequate cover for the entire map as well as generating a
sufficient number of samples for each of the classes on the map.
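A stratified random design of the kind described, drawing a random sample within each map class, can be sketched as follows (the class pixel lists are illustrative):

```python
import random

def stratified_random_sample(class_pixels, n_per_class, seed=0):
    """Draw n accuracy-assessment points at random within each map class.
    class_pixels maps a class label to the list of its pixel locations."""
    rng = random.Random(seed)  # fixed seed for a repeatable sample design
    sample = {}
    for label, pixels in class_pixels.items():
        k = min(n_per_class, len(pixels))
        sample[label] = rng.sample(pixels, k)
    return sample

# illustrative classified map: (row, col) pixel locations per class
classes = {
    "water":  [(r, c) for r in range(10) for c in range(5)],
    "forest": [(r, c) for r in range(10) for c in range(5, 10)],
}
picked = stratified_random_sample(classes, n_per_class=5)
print({k: len(v) for k, v in picked.items()})
```

Because every class is sampled, even small classes are guaranteed enough assessment points, which is the advantage over a purely random design.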
Types of Sampling:-
[Figure: random, systematic, and stratified random sampling patterns]
• Sample Size:
1. The size of the sample used must be sufficiently large to be statistically representative of
the map area. The number of points considered necessary varies, depending on the
method used to estimate.
2. What this means is that when using a systematic or random sample design, the number of
points is kept to a manageable number. Because the number of points contained within a
stratified area is usually high (greater than 10,000), the number of samples used to test the
accuracy of the classes through a stratified random sample will be high as well; the cost
of using a highly accurate sampling strategy is a large number of samples.
3. Once a classification has been sampled a contingency table (also referred to as an error
matrix or confusion matrix) is developed.
4. This table is used to properly analyze the validity of each class as well as the
classification as a whole.
5. In this way we can evaluate in more detail the efficacy of the classification.
Field Data:
• positional accuracy
• attribute accuracy
• measurement accuracy
Map Boundary:
• registration
• scale
Classification:
• correctly identified classes
• mis-classification
• un-identified classes
Contingency Matrix:-
• One way to assess accuracy is to go out in the field and observe the actual land class
at a sample of locations, and compare to the land classification it was assigned on the
thematic map.
• There are a number of ways to quantitatively express the amount of agreement
between the ground truth classes and the remote sensing classes.
• One way is to construct a confusion matrix, alternatively called an error matrix.
• This is a row by column table, with as many rows as columns.
• Each row of the table is reserved for one of the information, or remote sensing classes
used by the classification algorithm.
• Each column displays the corresponding ground truth classes in an identical order
Contingency Tables:-
For a simple example involving only 3 classes, consider a 3 x 3 error matrix in which the diagonal
elements tally the number of pixels classified correctly in each class.
User's accuracy:-
1. A user of the imagery who is particularly interested in class A, say, might wish to know
what proportion of pixels assigned to class A were correctly assigned.
2. In this example 35 of the 39 pixels were correctly assigned to class A, and the user
accuracy in this category is 35/39 ≈ 90%.
In general terms, user accuracy for a particular category is computed as:
User accuracy = Number of correct classifications / Total number of classifications in the category
which, for an error matrix set up with the row and column assignments as stated, is computed as:
User accuracy = Number in diagonal cell of error matrix / Row total
Evidently, user accuracy can be computed for each row.
Producer's accuracy:-
1. Contrasted with user accuracy is producer accuracy, which has a slightly different
interpretation.
2. Producer's accuracy is a measure of how much of the land in each category was classified
correctly.
It is found, for each class or category, as:
Producer accuracy = Number in diagonal cell of error matrix / Column total
Accuracy assessment:-
So from this assessment we have three measures of accuracy which address subtly different
issues:
1. Overall accuracy: takes no account of source of error (errors of omission or commission)
2. User accuracy: measures the proportion of each TM class which is correct.
3. Producer accuracy: measures the proportion of the land base which is correctly classified.
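These three measures can be computed directly from an error matrix. The matrix below is hypothetical except for row A, which is chosen to match the text's example of 35 correct pixels out of 39 assigned to class A:

```python
# Hypothetical 3-class error matrix: rows = classified (map) classes,
# columns = ground-truth (reference) classes.
matrix = [
    [35, 2, 2],   # classified as A (row total 39, 35 correct)
    [3, 40, 4],   # classified as B
    [2, 3, 44],   # classified as C
]

n = sum(sum(row) for row in matrix)                              # total pixels
diag = [matrix[i][i] for i in range(3)]                          # correct per class
row_totals = [sum(row) for row in matrix]
col_totals = [sum(matrix[r][c] for r in range(3)) for c in range(3)]

overall = sum(diag) / n                                  # overall accuracy
user = [diag[i] / row_totals[i] for i in range(3)]       # diagonal / row total
producer = [diag[i] / col_totals[i] for i in range(3)]   # diagonal / column total

print(round(overall, 3), round(user[0], 3), round(producer[0], 3))
```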
Kappa coefficient:-
Another measure of map accuracy is the kappa coefficient, which is a measure of the proportional
(or percentage) improvement by the classifier over a purely random assignment to classes.
K̂ = [ N Σ x_ii − Σ (x_ir × x_ic) ] / [ N² − Σ (x_ir × x_ic) ]
where each sum runs over i = 1 to r, and:
r = number of rows (and columns) in the error matrix
N = total number of observations in the error matrix
x_ii = major diagonal element for class i
x_ir = total number of observations in row i
x_ic = total number of observations in column i
• K̂ (K-hat) provides a basis for determining the statistical significance of any given
classification matrix.
• The quality of the accuracy estimate depends on the quality of the information used as
ground truth (which has its own accuracy estimate).
For an error matrix with r rows, and hence the same number of columns,
Let A = the sum of r diagonal elements, which is the numerator in the computation of overall
accuracy.
Let B = sum of the r products (row total x column total).
Then
K̂ = (N × A − B) / (N² − B)
where N is the number of pixels in the error matrix (the sum of all the individual cell values).
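The KHAT computation in terms of A, B and N can be sketched as follows; the error matrix is the same hypothetical 3 × 3 illustration used for the accuracy measures above, and any square error matrix would do.

```python
# Hypothetical error matrix: rows = classified, columns = ground truth.
matrix = [
    [35, 2, 2],
    [3, 40, 4],
    [1, 3, 50],
]
r = len(matrix)
N = sum(sum(row) for row in matrix)                       # sum of all cell values

A = sum(matrix[i][i] for i in range(r))                   # sum of r diagonal elements
row_totals = [sum(row) for row in matrix]
col_totals = [sum(matrix[i][j] for i in range(r)) for j in range(r)]
B = sum(row_totals[i] * col_totals[i] for i in range(r))  # sum of r (row x column) products

khat = (N * A - B) / (N ** 2 - B)
print(round(khat, 3))   # 0.838
```

A value near 1 indicates a large improvement over a purely random assignment to classes; a value near 0 indicates no improvement over chance.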
The error matrix, producer’s and user’s accuracy and KHAT value have become standard in
assessment of classification accuracy. However, if the error matrix is improperly generated by
poor reference data collection methods, then the assessment can be misleading. Therefore
sampling methods used for reference data should be reported in detail so that potential users can
judge whether there may be significant biases in the classification accuracy assessment.
43 Map projections, Concept, Classification, Use, Type,
Polyconic, Mercator, UTM
Mapping the earth is a two-step process: an approximating, regular model of the earth (a sphere
or ellipsoid) is chosen first, and geographic coordinates on that model are then transformed to
plane coordinates. Because the real earth's shape is irregular, information is lost in the first
step. Reducing the scale may be considered to be part of transforming geographic coordinates to
plane coordinates.
Most map projections, both practically and theoretically, are not "projections" in any physical
sense. Rather, they depend on mathematical formulae that have no direct physical interpretation.
However, in understanding the concept of a map projection it is helpful to think of a globe with a
light source placed at some definite point with respect to it, projecting features of the globe onto a
surface. The following discussion of developable surfaces is based on that concept.
A surface that can be unfolded or unrolled into a flat plane or sheet without stretching, tearing or
shrinking is called a 'developable surface'. The cylinder, cone and of course the plane are all
developable surfaces. The sphere and ellipsoid are not developable surfaces. Any projection that
attempts to project a sphere (or an ellipsoid) on a flat sheet will have to distort the image (similar
to the impossibility of making a flat sheet from an orange peel).
One way of describing a projection is to project first from the earth's surface to a developable
surface such as a cylinder or cone, followed by the simple second step of unrolling the surface
into a plane. While the first step inevitably distorts some properties of the globe, the developable
surface can then be unfolded without further distortion.
Classification
A fundamental projection classification is based on the type of projection surface onto which the
globe is conceptually projected. The projections are described in terms of placing a gigantic
surface in contact with the earth, followed by an implied scaling operation. These surfaces are
cylindrical (e.g., Mercator), conic (e.g., Albers), and azimuthal or planar (e.g., stereographic).
Many mathematical projections, however, do not neatly fit into any of these three conceptual
projection methods. Hence other peer categories have been described in the literature, such as
pseudoconic (meridians are arcs of circles), pseudocylindrical (meridians are straight lines),
pseudoazimuthal, retroazimuthal, and polyconic.
Another way to classify projections is through the properties they retain. Some of the more
common categories are:
• Direction preserving, called azimuthal (but only possible from the central point)
• Locally shape-preserving, called conformal or orthomorphic
• Area-preserving, called equal-area or equiareal or equivalent or authalic
• Distance preserving - equidistant (preserving distances between one or two points and
every other point)
• Shortest-route preserving - gnomonic projection
The mapping of meridians to vertical lines can be visualized by imagining a cylinder (of which
the axis coincides with the Earth's axis of rotation) wrapped around the Earth and then projecting
onto the cylinder, and subsequently unfolding the cylinder.
Unavoidably, all cylindrical projections have the same east-west stretching away from the equator
by a factor equal to the secant of the latitude, compared with the scale at the equator. The various
cylindrical projections can be described in terms of the north-south stretching:
• North-south stretching equal to the east-west stretching (secant(L)): conformal cylindrical,
or Mercator; this distorts areas excessively in high latitudes (see also transverse Mercator).
• North-south stretching growing rapidly with latitude, even faster than the east-west stretching
(secant(L)²): the cylindric perspective (central cylindrical) projection; unsuitable because
distortion is even worse than in the Mercator projection.
• North-south stretching growing with latitude, but less quickly than the east-west stretching:
such as the Miller cylindrical projection (secant(L × 4/5)).
• North-south distances neither stretched nor compressed (factor 1): equidistant cylindrical, or
plate carrée.
• North-south compression precisely the reciprocal of the east-west stretching (cos(L)): equal-area
cylindrical (with many named specializations such as Gall-Peters or Gall orthographic,
Behrmann, and Lambert cylindrical equal-area). This divides north-south distances by a factor
equal to the secant of the latitude, preserving area but heavily distorting shapes.
In the first case (Mercator), the east-west scale always equals the north-south scale. In the second
case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from
the equator. Each remaining case has a pair of identical latitudes of opposite sign (or else the
equator) at which the east-west scale matches the north-south-scale.
Cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases,
where the rectangle stretches infinitely tall while retaining constant width.
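The cases above can be compared numerically. The sketch below is an illustration, not from the text: it evaluates the north-south scale factor of each cylindrical projection at a given latitude, while the east-west factor is secant(L) in every case.

```python
import math

def ns_scale(projection, lat_deg):
    """North-south scale factor at latitude lat_deg, relative to the equator."""
    L = math.radians(lat_deg)
    sec = 1.0 / math.cos(L)
    return {
        "mercator": sec,            # matches the east-west stretching: conformal
        "central": sec ** 2,        # grows even faster: central cylindrical
        "plate_carree": 1.0,        # neither stretched nor compressed
        "equal_area": math.cos(L),  # reciprocal of sec(L): preserves area
    }[projection]

# At 60 degrees latitude the east-west stretch is secant(60) = 2, so:
print(round(ns_scale("mercator", 60), 6))    # 2.0 - north-south matches east-west
print(round(ns_scale("central", 60), 6))     # 4.0 - distortion worse than Mercator
print(round(ns_scale("equal_area", 60), 6))  # 0.5 - compressed to preserve area
```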
Conic Projections
Conical projections are accomplished by intersecting, or touching, a cone with the global surface
and mathematically projecting lines onto this developable surface. A tangent cone intersects the
global surface to form a circle. This is conceptually equivalent to the touching of a sweatband of a
hat on a head. On this line of intersection, termed the standard parallel, the map will be relatively
error-free and possess equidistance. Cones may also be secant, and intersect the global surface
forming two circles which will possess equidistance. Note that use of the word "secant", in this
instance, is only conceptual, not geometrically accurate. As with planar projections, the conical
aspect may be polar, equatorial, or oblique.
The polyconic projection was used for most of the earlier USGS topographic quadrangles. The
projection is based on an infinite number of cones tangent to an infinite number of parallels. The
central meridian is straight. Other meridians are complex curves. The parallels are non-concentric
circles. Scale is true along each parallel and along the central meridian.
• Azimuthal Equidistant
o Azimuthal equidistant projections are sometimes used to show air-route
distances. Distances measured from the center are true. Distortion of other
properties increases away from the center point.
• Lambert Azimuthal Equal Area
o The Lambert azimuthal equal-area projection is sometimes used to map large
ocean areas. The central meridian is a straight line, others are curved. A straight
line drawn through the center point is on a great circle.
• Orthographic
o Orthographic projections are used for perspective views of hemispheres. Area and
shape are distorted. Distances are true along the equator and other parallels.
• Stereographic
o Stereographic projections are used for navigation in polar regions. Directions are
true from the center point and scale increases away from the center point as does
distortion in area and shape.
Polyconic
This projection was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern
coast of the U.S. Polyconic projections are made up of an infinite number of conic projections
tangent to an infinite number of parallels. These conic projections are placed in relation to a
central meridian.
Polyconic projections compromise properties such as equal area and conformality, although the
central meridian is held true to scale. All parallels are arcs of circles, but not concentric. All
meridians, except the central meridian, are concave toward the central meridian. Parallels cross
the central meridian at equal intervals but move farther apart at the east and west peripheries.
Once again, values of false easting and northing are usually included so that NO negative values
occur in the rectangular coordinate system representing the desired region of the map projection.
Mercator
This famous cylindrical projection was originally designed by Flemish map maker Gerhardus
Mercator in 1569 to aid navigation. Meridians and parallels are straight lines which cross at right
angles. Angular relationships are preserved.
To preserve conformality, parallels are placed increasingly farther apart with increasing distance
from the equator. Due to extreme scale distortion in high latitudes, the projection is rarely
extended beyond 80 degrees North or South.
Rhumb lines, which show constant direction, are straight but do NOT represent the shortest path;
great circles are the shortest path.
Again, values of false easting and northing are usually included so that NO negative values occur
in the rectangular coordinate system representing the desired region of the map projection.
Universal Transverse Mercator (UTM) - a global system developed by the US Military
Services
This is an international plane (rectangular) coordinate system developed by the U.S. Army which
extends around the world from 84 degrees North to 80 degrees South. The world is divided into
60 zones, each covering six (6) degrees of longitude. Each zone extends three degrees eastward
and three degrees westward from its central meridian. Zones are numbered consecutively west to
east from the 180 degree meridian. From 84 degrees North and 80 degrees South to the respective
poles, the Universal Polar Stereographic (UPS) is used.
The Transverse Mercator projection is applied to each UTM zone. Transverse Mercator is a
transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90
degrees from the vertical (polar) axis and can be placed to intersect at a chosen central meridian.
The UTM system specifies the central meridian of each zone. With a separate projection for each
UTM zone, a high degree of accuracy is possible (maximum distortion of one part in 1,000 within
each zone).
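The zone arithmetic described above is simple enough to sketch directly. This is a minimal illustration; it ignores the handful of official zone exceptions (e.g., around Norway and Svalbard).

```python
def utm_zone(lon_deg):
    """UTM zone number for a longitude in degrees (-180 <= lon < 180)."""
    # Zones are numbered consecutively west to east from the 180-degree
    # meridian, each covering 6 degrees of longitude.
    return int((lon_deg + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian (degrees) of a zone: 3 degrees inside its west edge."""
    return -180 + 6 * zone - 3

print(utm_zone(-75.0))       # 18 - eastern USA
print(central_meridian(18))  # -75
```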
44 Map Scale: Type and conversion, vertical exaggeration
The scale of a photograph expresses the mathematical relationship between a distance measured
on the photo and the corresponding horizontal distance measured in a ground coordinate system.
Unlike maps, which have a single constant scale, aerial photographs have a range of scales that
vary in proportion to elevation of the terrain involved. Once the scale of the photograph is known
at any particular elevation, ground distances at that elevation can be readily estimated from
corresponding photo distance measurements.
Photographic scale:-
One of the most fundamental and frequently used geometric characteristics of aerial photographs
is photographic scale. A photograph scale, like a map scale, is an expression stating that one
unit (any unit) of distance on the photograph represents a specific number of units of actual
ground distance. Scales may be expressed as unit equivalents, representative fractions or ratios.
A scale can vary from large to small on the basis of the area covered: the same objects appear
smaller on a smaller-scale photograph than on a larger-scale photograph. The most straightforward
method for determining photo scale is to measure the corresponding photo and ground distances
between any two points. This requires that the points be mutually identifiable on both the photo
and a map. The scale S is then computed as the ratio of the photo distance d to the ground
distance D, i.e. S = d / D.
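A short worked example with hypothetical measurements: suppose the distance between two road intersections is 40 mm on the photo and 1 000 m on the ground (taken from a map).

```python
d = 0.040    # photo distance in metres (40 mm) - hypothetical value
D = 1000.0   # corresponding ground distance in metres - hypothetical value

S = d / D    # photo scale as a representative fraction
print(f"1:{round(1 / S)}")   # 1:25000
```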
Vertical photograph:-
For a vertical photograph taken over flat terrain, scale is a function of the focal length f of
the camera used to acquire the image and the flying height above the ground, H', from which the
image was taken. In general,
S = f / H'
The above equation holds only for flat terrain, which is practically rare, so in order to
calculate the scale over sloping or rough terrain we have to reformulate it.
Exposure station L is at an aircraft flying height H above some datum, or arbitrary base
elevation. The datum most frequently used is mean sea level. If the flying height H and the
elevation of the terrain h are known, we can determine H' by subtracting the terrain elevation
h from the flying height H, i.e.
H' = H − h
If we consider terrain points A, O and B, they are imaged at points a', o' and b' on the negative
film and at a, o and b on the positive print. We can derive an expression for photo scale by
observing the similar triangles Lao and LAO, whose sides ao and AO are corresponding photo and
ground distances, i.e.
S = ao / AO = f / H'
Substituting H' = H − h,
S = f / (H − h)
and, for a photograph as a whole,
S (avg) = f / (H − h (avg))
where h (avg) is the average elevation of the terrain shown in the photograph.
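A worked example with hypothetical values: a camera of focal length f = 152 mm flown at H = 3 000 m above datum, over terrain whose average elevation is h (avg) = 500 m above the same datum.

```python
f = 0.152      # focal length in metres - hypothetical value
H = 3000.0     # flying height above datum in metres - hypothetical value
h_avg = 500.0  # average terrain elevation in metres - hypothetical value

S_avg = f / (H - h_avg)          # average photo scale
print(f"1:{round(1 / S_avg)}")   # 1:16447
```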
Because of the nature of this projection, any variation in terrain elevation will result in scale
variation and displaced image positions.
Stereoscopy:-
Stereoscopy is the method of viewing overlapping aerial photographs with the help of a
stereoscope. Stereoscopic vision yields a three-dimensional image of the photographed terrain.
Vertical Exaggeration:-
When we view aerial photographs through a stereoscope, the perceived image is influenced by
several technical factors, so that heights appear exaggerated relative to horizontal distances.
To quantify this effect we calculate the vertical exaggeration.
Vertical exaggeration, VE = (B / H) x (h / b)
Where,
B = Air base
H = Flying height
b = Eye base
h = depth at which stereo model is perceived.
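The formula can be evaluated directly; the numbers below are hypothetical but of realistic magnitude for the variables defined above.

```python
B = 2300.0   # air base (distance between exposure stations), metres - hypothetical
H = 3500.0   # flying height, metres - hypothetical
b = 0.065    # eye base, metres - hypothetical
h = 0.45     # perceived depth of the stereo model, metres - hypothetical

VE = (B / H) * (h / b)   # vertical exaggeration
print(round(VE, 2))      # 4.55
```

A value greater than 1 means relief in the stereo model appears stretched vertically compared with horizontal distances.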
45. GIS- Definitions, Components, Objectives and hardware
& Software Requirement
Geographic Information system (GIS):-
A Geographic Information System (GIS) is a system for capturing, storing, analyzing and
managing data and associated attributes which are spatially referenced to the earth. It is a
computer system capable of integrating, storing, editing, analyzing, sharing, and displaying
geographically-referenced information. GIS is a tool that allows users to create interactive
queries (user-created searches), analyze spatial information, edit data and maps, and present the
results of all these operations.
It integrates common database operations such as query and statistical analysis with the unique
visualization and geographic analysis benefits offered by maps. These abilities distinguish GIS
from other information systems and make it valuable to a wide range of public and private
enterprises for explaining events, predicting outcomes, and planning strategies. (ESRI).
A typical GIS can be understood by the help of various definitions given below:-
A GIS is most often associated with maps. A map, however, is only one way you can work with
geographic data in a GIS, and only one type of product generated by a GIS. This is important,
because it means that a GIS can provide a great deal more problem-solving capabilities than using
a simple mapping program or adding data to an online mapping tool (creating a "mash-up").
1. The Database View: A GIS is a unique kind of database of the world—a geographic
database (geodatabase). It is an "Information System for Geography." Fundamentally, a
GIS is based on a structured database that describes the world in geographic terms.
2. The Map View: A GIS is a set of intelligent maps and other views that show features and
feature relationships on the earth's surface. Maps of the underlying geographic
information can be constructed and used as "windows into the database" to support
queries, analysis, and editing of the information. This is called geo visualization.
3. The Model View: A GIS is a set of information transformation tools that derive new
geographic datasets from existing datasets. These geo processing functions take
information from existing datasets, apply analytic functions, and write results into new
derived datasets.
In other words, by combining data and applying some analytic rules, one can create a
model that helps answer the question for analysis.
Components of GIS
GIS is a real application, including the hardware, data, software and people needed to solve a
problem.
GIS hardware: A GIS runs on hardware like any other computer system: keyboard, display monitor
(screen), cables, Internet connection, with some extra components. Because maps come on big
sheets of paper, a GIS may need:
- specially big printers and plotters to make map output from the GIS
- specially big devices to scan and input data from maps to the GIS
- digitizers and scanners
But not all GIS installations will need these. What is important is the kind of information that
is stored: information about what is where, the contents of maps and images. A GIS includes the
tools to do things with this information, such as special functions that work on geographic
information to display it on the screen, edit, change and transform it, and measure distances.
Keeping the combined maps of an area together is simple, but functions can be much more
sophisticated, for example to:
- keep inventories of what is where
- manage properties and facilities
- judge the suitability of areas for different purposes
- help users make decisions about places, to plan
- make predictions about the future
All these sophisticated functions also require human expertise for the interpretation and
management of data.
GIS Software:
The functions that a GIS can perform are part of its software. This software will probably have
been supplied by a company that specializes in GIS. The price of the software may be anywhere
from $50 to $50,000.
Open Source Software: The source code is freely available and is licensed so that it can be
freely distributed and modified as long as appropriate credit is provided to the developers.
Example: GeoTools, an open source Java GIS toolkit for developing standards-compliant solutions;
GeoTools aims to support Open GIS and other relevant standards as they are developed. Other
examples are Fmaps, EDBS Reader, GMT, and Intergraph WMS Viewer.
Server-based Software: Server GIS is used for many kinds of centrally hosted GIS computing.
Example: GIServer, "an initiative from the inova GIS project that gives free access to GIS
functions through the Internet." Other examples are MapServer etc.
Desktop Software: Licensed software whose source code is not freely available. Example: ESRI,
whose available software includes ArcGIS, ArcSDE, ArcIMS, and ArcWeb services; ESRI is known
best for the shapefile format, which is often used to supply or transfer GIS data. Other
examples are EPPL7, Ilwis, Intergraph, Manifold etc.
There are many types of paid GIS Software and many types of Freeware GIS Software available
in market for different purpose which are given below:
Paid GIS Software:
1) AGIS
2) AUTODESK
• Autodesk has a series of software applications designed to meet GIS needs in a variety of
areas.
• Autodesk Map- delivers specialized functionality for creating, maintaining, and producing
maps and geographic data.
• Built on AutoCAD® 2000i, AutoCAD Map 2000i adds new Internet tools to keep you in
touch with your colleagues, customers, and data.
• Autodesk Mapguide- get live, interactive access to your designs, maps, and data from the
Internet, your intranet, or in the field.
• Autodesk MapGuide® Release 5 software makes it all possible.
Platforms: UNIX, PC, Macintosh, WinCE, and Palm devices.
3) DeLorme
• DeLorme is the producers of XMap, a GIS application "with 80% of the functionality
found in a traditional GIS at 15% of the cost".
• Performs functions such as geocoding, image rectification, 3D visualization and
coordinate transformation.
• XMap 4.5 is powerful and scalable mapping software that provides users with easy-to-use
and affordable digital mapping tools.
• Add-on software modules expand capabilities further encompassing image registration
and aerial photography mission planning.
• A wide variety of DeLorme data and imagery sets are available that work seamlessly with
XMap 4.5.
• The platform’s data structure enables XMap to support OpenGIS® and interoperability
between most data formats.
• Affordable and feature packed, XMap 4.5 provides users with import tools, data
management flexibility, split-screen viewing, advanced draw and print capabilities, and a
variety of different DeLorme datasets from which to choose.
• Data that is analyzed within the XMap/GIS Editor package can be viewed within XMap
4.5.
• XMap 4.5 is a flexible, comprehensive tool designed to meet the spatial data needs of
professionals within a variety of industries.
1. Utilities
2. Civil Engineering
3. Public Safety
4. Government
5. Land Management
6. Transportation
7. Real Estate
4) EPPL7
Landscape analysis: viewshed, terrain visualization, slope analysis, aspect analysis
Neighborhood operations: averaging, evaluating, clustering or grouping cells, distance mapping,
buffers
Overlay functions: reclassification, logical evaluation of overlapping themes, mosaics
Data conversion: vector and raster data, interpolating point and line data, tabular data to
raster files, file format conversion
Utility and file management: file transformation, rescaling and resizing files, using windows
5) ESRI
• Environmental Systems Research Incorporated has been creating GIS software for over
30 years.
• Recognized as the leader in GIS software, it's been estimated that about seventy percent
of GIS users use ESRI products. ESRI overhauled their software packages into an
interoperable model called ArcGIS.
• The three main GIS software packages available from ESRI are: ArcInfo/ArcView 8.x,
ArcView 3.x and ArcIMS.
• ArcInfo was the first software product available from ESRI and is also the most
comprehensive analytical and mapping software offered by ESRI.
• ArcView 3.x is the original desktop solution offered by ESRI as an out-of-the box
desktop mapping software product for the end user.
• More user friendly than ArcInfo, ArcView's editing and data manipulation capabilities are
extended with each update.
• In addition, ESRI has developed plug-ins called extensions which add to the functionality
of ArcView. ArcIMS is a relatively young product from ESRI designed to create out-of-
the-box web mapping but also allowing developers to create more involved, custom
browser-based mapping applications.
• A Visual Basic component, Map Objects allows programmers to build cartographic
applications from the ground up.
• Platforms: UNIX, Windows OS
6) Geo/SQL
7) IDRISI
• IDRISI Kilimanjaro is a sophisticated GIS and Image Processing software solution that includes
over 200 modules for the analysis and display of digital spatial information.
• IDRISI is the industry leader in raster analytical functionality covering the full spectrum
of GIS and Remote Sensing needs from database query, to spatial modeling, to image
enhancement and classification.
• IDRISI Kilimanjaro uses the latest object-oriented development tools, bringing true
research power to the NT workstation (NT) and desktop.
• TIN interpolation, Kriging and conditional simulation are also offered.
• Application areas include spatial analysis, remote sensing, natural resource management,
ecology and conservation, environmental management, and land use planning.
• Special facilities are included for environmental modeling and natural resource
management, including change and time series analysis, land change prediction, multi-
criteria and multi-objective decision support, uncertainty analysis and simulation
modeling.
8) ILWIS
• Ilwis is a GIS and Remote Sensing package offering orthorectification, geostatistics and
overlay capabilities.
• ILWIS integrates image, vector and thematic data in one unique and powerful package on
the desktop.
• ILWIS delivers a wide range of features including import/export, digitizing, editing,
analysis and display of data, as well as production of quality maps.
• The main features of Ilwis are:
• Integrated raster and vector design
• Import and export of widely used data formats
• On-screen and tablet digitizing
• Comprehensive set of image processing tools
• Orthophoto, image georeferencing, transformation and mosaicing
• Advanced modeling and spatial data analysis
• 3D visualization with interactive editing for optimal view finding
• Rich projection and coordinate system library
• Geo-statistical analyses, with Kriging for improved interpolation
Books:
Lillesand, Kiefer & Chipman. REMOTE SENSING AND IMAGE INTERPRETATION. Wiley.
Sabins, F. F., Jr. 1987. REMOTE SENSING: PRINCIPLES AND INTERPRETATION. New York: W. H. Freeman.
M. Anji Reddy. TEXTBOOK OF REMOTE SENSING AND GEOGRAPHICAL INFORMATION SYSTEMS. B.S. Publications.
J. Ronald Eastman. GUIDE TO GIS AND IMAGE PROCESSING. IDRISI Production.
David L. Verbyla. SATELLITE REMOTE SENSING OF NATURAL RESOURCES. 2005.
Michael Lefsky. PRESENTATION ON ACCURACY ASSESSMENT. 2006.
Globe. TUTORIAL ON ACCURACY ASSESSMENT. 2005.
Websites: