
GIS Reader

ACKNOWLEDGEMENT

We gratefully acknowledge the support of Prof. Anjana Vyas for giving us the opportunity to present these papers. Thanks also to the anonymous referees who provided very useful comments on earlier drafts. Responsibility for the contents, of course, rests with the readers.

-Readers


CONTENTS OF THE BOOK

S. No. | Topic | Name of Readers
1 | Historical Perspective of Remote Sensing; Development of Remote Sensing in India | Adnan Diwan, Swapna Deshmukh
2 | Electromagnetic Radiation & EMR Spectrum; Theories of EMR | Paulose NK, Ashok Chaudari
3 | Interaction of EMR with the Earth Surface & Atmosphere | Pranjali Deshpande, Utkarsha Kavadi
4 | Atmospheric Windows; Physical Basis of Signatures: Vegetation, Soil, Water Bodies/Ocean | Rishabh Pandey, Yungadaraj Redkar
5 | Sensors: Push Broom Cameras, Hyperspectral Imagers | Shaikh Faiz, Abha Sharma
6 | Sensors: Optomechanical Scanners Operated from Satellites; Quality of Image in Optical Systems | Priyanka Pawar, Rajesh Asati
7 | Platforms: Ground-Based, Air-Borne, Space-Borne | Shweta Gupta
8 | Orbits: Geostationary Satellites and Polar-Orbiting Satellites | Arindam Majumdar, Mousumi Chakraborty
9 | History and Development of Aerial Photography | Deepty Jain
10 | Types, Geometry and Scale of Aerial Photography | Nitu Sakhare
11 | Aerial Photogrammetry: Image Parallax, Parallax Measurement and Relief Displacement | Tanaya, Jaladhi
12 | Stereoscopes and Other Instruments Used for Aerial Photogrammetry | Lekshmi D
13 | Digital Images: Sources of Errors, Radiometric and Geometric | Prabhjit Singh Dhillon, Gaurav Vaidya
14 | Image Rectification: Radiometric Correction, Geometric Correction, Noise Removal | Mary Hans George
15 | Image Enhancement Techniques: Contrast Enhancement (Linear & Non-Linear), Logarithmic Enhancement, Exponential Contrast Enhancement, Gaussian Stretch, Density Slicing | Deshpande Shrinivas, Eleza Boban
16 | Image Enhancement Techniques: Spatial Filtering (High and Low Frequency), Edge Enhancement, Band Ratioing | Ambily P, Sireesha
17 | Image Classification: Supervised Classification; Training Site Selection & Statistical Information Extraction | Jay Padalia, Sangeetha Raghuram
18 | Image Classification: Discriminant Functions; Maximum Likelihood Classifier, Euclidean Distance, Mahalanobis Distance | Divya Verma, Urvi Pandya
19 | Image Classification: Unsupervised Classification, Classification Accuracy Assessment, Error Matrix | Parthi Soni, Shivnath Patil
20 | Visual Image Analysis: Elements of Image Interpretation; Reference System of the IRS Satellite | Abhijit Sinha, Soumita Gupta
21 | Spatial Analysis: Significance of Spatial Analysis, Overview of Tools for Analysis | Bhavsar Dhruv D, Mallika Jain
22 | Surface Analysis: Interpolation Methods | Poornima Singh, Prashant Sanga
23 | Surface Analysis: DEM, TIN, Slope, Aspect, Relief and Hill Shading |
24 | Geographic Data: Types of Data, Levels of Measurement | Namgyal Dophu
25 | Spatial Data: Concept of Space and Time, Layers and Coverages, Spatial Data Models | Shriya Bahtra, Cheryl Bicknell
26 | Spatial Data: Representation of Geographic Features in Vector and Raster Models; Point, Line, Polygon, Grid | Pankaj Sampat, Prashant
27 | Spatial Data: Concept of Arcs, Nodes, Vertices and Topology | Pooja Sanghani
28 | Spatial Data: Computer Representation for Storing Spatial Data; Block Code, Run-Length Encoding, Chain Code, Quadtree | Vidhee Avashia, Sneha Malakesirju
29 | Non-Spatial Data: RDBMS Concepts, Components, Database Schema; Relationships (One-to-One, One-to-Many, etc.) | Anuja Singh, Purva Tavri
30 | Non-Spatial Data: SQL, Query Processing, Operations | Sakshi Sahni, Suryakant Verma
31 | Spatial Data Input: Digitization; Error Identification, Types and Sources of Errors, Correction, Editing, Topology Building | Neha Jharia, Nisha Poddar
32 | Automating the Overlay Process | Meena Gajjar, Deepa Gupta
33 | Raster-Based Analysis: Map Algebra, Grid-Based Operations; Local, Focal, Zonal & Global Functions | Kevisino Angami, Gayatri Sahoo
34 | Vector-Based Analysis: Multilayer Operations; Union, Intersection, Clip | Anuradha Naulakha, Tarun Patel
35 | Spatial Representation of Geographical Features in Raster and Vector Models | Pavan Kumar A, Rthnakar NC
36 | Network Analysis: Concepts, Evaluation of Network Complexity Using Alpha and Gamma Indices | Nidhi Shah, Halak Bhatt
37 | Network Analysis: C-Matrices for Evaluating Connectivity of the Network | Manisha, Niruti
38 | Network Analysis: Network Data Model | S Indupriya, M Vijaya Kumari
39 | Methods for Evaluating Point Patterns: Clustered and Random Distribution |
40 | Ground Control Points and Flight Planning | Surbhi Gupta
41 | GPS: Concept, Type, Mode of Coordinate Collection | Namrata Dutta
42 | Ground Truth and Accuracy Assessment | Komali Rani Y
43 | Map Projection: Concept, Classification, Use, Type; Polyconic, Mercator, UTM, etc. | Manish Shirsath, Heena Vora
44 | Map Scale: Types and Conversion, Vertical Exaggeration | Arvind Rai, Gauri Deshpande
45 | GIS: Definitions, Evolution, Components | Lavinder Walia


1 Historical Perspective of Remote Sensing, Development of Remote Sensing in India
History
Remote sensing is defined as the science and art that permits us to obtain information about an
object or phenomenon through the analysis of data acquired by a sensing device without its being
in contact with that object or phenomenon. Hence it refers to the collection of data by instruments that are commonly used to survey, map, and monitor the resources and environment of the Earth; they have also been used to explore other planets.
The history of remote sensing can be summarized and confined to some of the significant
developments in aerial photography and space imaging.

Aerial Photography:-
Although the first, rather primitive photographs were taken as "stills" on the ground, the idea of
photographing the Earth's surface from above, yielding the so-called aerial photo emerged in the
1860s with pictures from balloons. From then until the early 1960s, the aerial photograph
remained the single standard tool for depicting the surface from a vertical or oblique perspective.
Aerial photography refers to photographs taken from air-borne platforms. Its development can be classified under the following heads:

• Development of aerial sensors
• Development of aerial platforms
• Development of mensuration techniques

Development of aerial sensors:


Aerial cameras carrying photographic film are the most widely used aerial sensors. In the early days, pinhole cameras were used to take pictures of objects; in the course of time these were replaced by simple lens cameras. The latest development in photographic cameras is the large-format camera placed on board the Space Shuttle.
The discovery of sodium thiosulphate ('hypo') was an important development in the field of film processing. Other significant developments were roll film, introduced by Eastman in 1885, and the panchromatic black-and-white, infrared black-and-white, infrared false-colour and colour films that are now extensively used for remote sensing applications. High-resolution films are nowadays the most widely used.

Development of Aerial Platforms:


The first aerial photograph was taken by Gaspard Felix Tournachon (later known as Nadar) in 1858 with the help of a captive balloon in France. Later, kites and balloon kites were also used to take photographs. After the invention of the airplane by the Wright brothers in 1903, aerial photography from airplanes began in 1909. Since then, air-borne platforms have been improved constantly. In India four types of aircraft are mainly used, viz. the Dakota, Avro, Cessna and Canberra; in other countries the U-2, RC-135, SR-71, Rockwell X-15, etc. are used.
All these aircraft carry a variety of sensors and are fitted with modern navigational equipment.

Development of Mensuration Techniques:


The early instruments for the compilation of maps from photographs relied on laborious graphic constructions. From 1910 to 1930 more sophisticated instruments, such as stereographs, aerocartographs and stereoplanigraphs, were developed. The orthophotoscope and orthophotographs were developed in the 1950s. A number of analytical plotters have since been developed which use an on-line computer to perform a number of operations.


Space Imaging:-
Space imaging can be classified under the following heads:

• Space platforms
• Sensors
• Interpretation equipment

Space Platforms:
The platform used for space imaging is a spacecraft. Space remote sensing started in earnest during the 1950s; the launch of the Sputnik-1 spacecraft by the USSR in 1957 opened a new era in remote sensing.
The systematic observation and imaging of the earth's surface from orbiting satellites started in the 1960s. The launch of the first Earth Resources Technology Satellite, ERTS-1 (later renamed Landsat-1), in July 1972 was undoubtedly the greatest advancement in earth orbital imaging. The first American space workshop, Skylab, took over 35,000 images of the earth with a six-camera multispectral array, a long-focal-length earth terrain camera, a thirteen-channel multispectral scanner and two microwave systems. The satellites launched between Landsat-2 (January 22, 1975) and Landsat-5 (March 1, 1984) introduced a new generation of earth resources satellites with improved spatial resolution, radiometric sensitivity and faster data supply rates.
The first Indian Remote Sensing Satellite, IRS-1A, equipped with multiband sensors, was launched in March 1988; this and India's later launches are discussed below.

Sensors:
The Landsat series of satellites carried mainly three sensor systems, viz. the multispectral scanner (MSS), the return beam vidicon (RBV) camera and the Thematic Mapper (TM). Later satellites such as SPOT and the Indian Remote Sensing Satellites used a linear array of charge-coupled devices (CCDs) as the basic sensor system.

Interpretation equipments:
The simple equipment for visual interpretation of satellite imagery includes the mirror stereoscope, the magnifying glass and the light table.

Development in India:
India became the seventh nation to achieve orbital launch capability in July 1980 and is pressing ahead with an impressive national programme aimed at developing launchers as well as nationally produced communications, meteorological and earth resources satellites. Prof. U.R. Rao, Chairman of ISRO, said in October 1984 that space technology had given India the opportunity to convert backwardness into an asset: developing countries could bypass the intermediate technology stage and leapfrog into the high-technology arena. Like France, India has benefited from simultaneous co-operation with the CIS/USSR, the US and ESA.

India's launchers:
The Indian Space Research Organisation (ISRO) carried out its first successful SLV-3 launch on 18 July 1980, adding India to the list of space-faring nations. The current generation of launchers, represented by the PSLV (Polar Satellite Launch Vehicle), fully successful on its second attempt in October 1994, provides the capability of placing a 1-ton-class IRS satellite into a Sun-synchronous orbit and is now offered commercially through the Antrix Corporation. An upgrade, the Geosynchronous Satellite Launch Vehicle (GSLV), is under way to provide a 2.5-ton-class launch capability into geostationary orbit by 1998-99.
Indian Remote Sensing Satellite (IRS) launch log:

The initial versions comprise the 1 series (A, B, C, D). Later versions are named for their area of application, including OceanSat, CartoSat and ResourceSat. Some of the satellites have alternate designations based on the launch number and vehicle (the P series for PSLV).

Sl No | Satellite | Date of Launch | Launch Vehicle | Status | Remarks
1 | IRS 1A | 17 March 1988 | Vostok, USSR | Mission completed (retired from service on 17 March 1995) | Carried two sensors, the LISS-1 (Linear Imaging Self-Scanning System, 72.5 m resolution) and the LISS-2A & B (36.25 m). During its service it collected about 300,000 images.
2 | IRS 1B | 29 August 1991 | Vostok, USSR | Mission completed |
3 | IRS P1 (also IE) | 20 September 1993 | PSLV-D1 | Crashed due to launch failure of PSLV | Considered an experimental satellite
4 | IRS P2 | 15 October 1994 | PSLV-D2 | Mission completed | Carried the Multispectral Optoelectronic Scanner
5 | IRS 1C | 28 December 1995 | Molniya, Russia | Mission completed |
6 | IRS P3 | 21 March 1996 | PSLV-D3 | Mission completed | Major applications: ocean chlorophyll, vegetation assessment, snow studies and geological mapping for identifying prospective mineral sites
7 | IRS 1D | 29 September 1997 | PSLV-C1 | In service | Urban sprawl, infrastructure planning and other large-scale thematic mapping
8 | IRS P4 (Oceansat-1) | 27 May 1999 | PSLV-C2 | In service | Identification of potential fisheries; delineation of coastal currents and eddies; estimation of optical properties and phytoplankton abundance for marine resource and habitat assessment; observation of pollution and sediment inputs to the coastal zone and their impact on marine food
9 | Technology Experiment Satellite (TES) | 22 October 2001 | PSLV-C3 | In service |
10 | IRS P6 (Resourcesat-1) | 17 October 2003 | PSLV-C5 | In service | Remote sensing applications
11 | IRS P5 (Cartosat-1) | 5 May 2005 | PSLV-C6 | In service | Cartographic applications
12 | Cartosat-2 (IRS P7) | 10 January 2007 | PSLV-C7 | In service | Cartographic applications

Conclusion:
With the satellites designed and built by India in the INSAT and IRS series, the country has
started to reap the benefits of space technology for developmental applications, specifically in the
areas of communication, broadcasting, meteorology, disaster management and the survey and
management of resources. The planned launches of more powerful satellites will further enhance
and extend the benefits of space technology. The successful launch of PSLV and the progress
made in the development of GSLV give confidence in the capability of India to launch the IRS
and INSAT class of satellites from its own soil. Thus, India today has a well-integrated and self-
supporting space programme which is providing important services to society.




2 Electromagnetic Radiation & EMR Spectrum, Theories of EMR
History
James Clerk Maxwell (1831-1879) in the 1860s first conceptualized electromagnetic radiation (EMR) as an electromagnetic wave that travels through space at the constant speed of light (3 × 10^8 m/sec). The German scientist Max Planck suggested in 1900 that a hot object, or body (above absolute zero), such as a star, must radiate energy (light, X-rays, etc.) in discrete packets that he called quanta. Electromagnetic radiation is generated whenever an electrically charged particle or body is accelerated. During its propagation as a wave, the energy exhibits two kinds of fluctuating vector fields: 1) an electric vector and 2) a magnetic vector. The two vectors are at right angles (orthogonal) to each other, and both are perpendicular to the direction of propagation.

Some definitions:
Wavelength (λ) = the mean distance between successive wave peaks (maxima or minima). The most common unit used to measure wavelength is the micrometer (μm).
Frequency (ν) = the number of wavelengths that pass a fixed point per unit time. Its most frequently used unit is the hertz (Hz).
The relationship between the wavelength and frequency of electromagnetic radiation may be expressed by the following formula:
C = λν --------------------------------- (1)
where C = speed of light (3 × 10^8 m/sec).
From the above formula it follows that frequency is inversely proportional to wavelength: the higher the frequency, the shorter the wavelength, and vice versa.

Principles of EMR:
EMR occurs as a continuum of wavelengths and frequencies, from short-wavelength, high-frequency radiation to long-wavelength, low-frequency radiation. This is known as the electromagnetic spectrum. The visible portion of the electromagnetic spectrum for human eyes ranges in wavelength from about 0.4 μm to 0.7 μm. The color blue is ascribed to the approximate range 0.4 to 0.5 μm, green to 0.5 to 0.6 μm, and red to 0.6 to 0.7 μm. Ultraviolet (UV) energy adjoins the blue end of the visible portion of the spectrum, and infrared (IR) waves adjoin the red end. According to their wavelength, IR waves are classified as near IR (0.7 to 1.3 μm), mid IR (1.3 to 3 μm) and thermal IR (beyond 3, to 14 μm).


The relationship between the frequency and the energy of a quantum is expressed as follows:
Q = hν ------------------------ (2)
where Q = energy of a quantum, measured in joules (J)
h = Planck's constant (6.626 × 10^-34 J·sec)
Substituting ν = C/λ (from equation 1) into equation 2 gives:
Q = hC/λ --------------------- (3)
From equation 3 it is clear that the energy of a quantum is inversely proportional to its wavelength, i.e. the longer the wavelength involved, the lower its energy content, and vice versa. This relationship has very important implications for remote sensing because it suggests that it is more difficult to sense longer-wavelength energy, such as microwave emission, than shorter-wavelength energy, such as thermal IR.
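To make the arithmetic concrete, here is a minimal Python sketch of equations (1) and (3); it is illustrative only, and the example wavelengths are chosen freely:

```python
# Sketch of equations (1) and (3); constants in SI units.
C = 3.0e8        # speed of light, m/sec
H = 6.626e-34    # Planck's constant, J*sec

def frequency(wavelength_m):
    """Equation (1) rearranged: nu = C / lambda."""
    return C / wavelength_m

def quantum_energy(wavelength_m):
    """Equation (3): Q = h*C / lambda; energy falls as wavelength grows."""
    return H * C / wavelength_m

# A thermal IR (10 micrometer) photon vs. a microwave (1 cm) photon: the
# microwave quantum carries about 1000 times less energy, which is why
# long-wavelength emission is harder to sense.
print(quantum_energy(10e-6))  # ~2.0e-20 J
print(quantum_energy(1e-2))   # ~2.0e-23 J
```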
Substances may have color because of differences in their energy levels.
Sources of electromagnetic radiation energy:

The Sun is the main initial source of the EMR recorded by remote sensing systems, although all objects above absolute zero (-273°C, or 0 K), including water, soil and vegetation, radiate EMR. The thermonuclear fusion taking place in the Sun yields a continuous spectrum of electromagnetic energy. The roughly 6000 K temperature of this process produces a large amount of relatively short-wavelength energy (dominantly 0.483 μm) that travels through the vacuum of space and the atmosphere of the earth at the speed of light. Some of this energy is intercepted by the earth's surface, which may reflect some of it directly back to space, or absorb the short-wavelength energy and then re-emit it at a longer wavelength.

EMR & Remote Sensing:


Remote sensing is performed using an instrument, often referred to as a sensor. These sensors record the reflection and/or emission of electromagnetic energy from the earth's surface features.
Radiant energy emitted by or reflected from ground features is transmitted to the sensing
instrument in the form of waves. Some sensors, such as radar systems, supply their own source of
energy to illuminate features of interest.

Electromagnetic Energy & Atmospheric Particles:

Radiant energy from the Sun propagates towards the earth through the vacuum of space and the atmosphere at almost the speed of light. Unlike the vacuum, in which nothing happens to it, the atmosphere may affect the speed of the radiation, its wavelength, its intensity and spectral distribution, and even its direction of propagation. The net effect of the atmosphere varies with the path length, with the magnitude of the energy being sensed, and with the atmospheric conditions present. Scattering and absorption of EMR in the atmosphere are the primary causes of these effects.

Scattering:
The atmosphere contains aerosol particles and gas molecules that scatter electromagnetic energy according to its wavelength. Aerosols such as water vapour, suspended particulate matter (SPM) and smoke scatter the EMR, causing changes in the direction and intensity of the radiation. Generally, scattering decreases with increasing wavelength of EMR; therefore, radiation near the blue end (0.4 to 0.5 μm) of the visible portion is scattered much more than radiation at the longer visible wavelengths. Consequently, we see a blue sky on a clear day.

Absorption:
Atmospheric absorption results in an effective loss of electromagnetic energy even greater than that caused by scattering. Atmospheric constituents such as water vapour (H2O), carbon dioxide (CO2), SPM and ozone (O3) absorb a considerable amount of EMR. However, absorption is selective by wavelength: EMR with wavelengths shorter than 0.3 μm is completely absorbed by ozone (O3) in the upper atmosphere, whereas water particles in clouds absorb EMR at wavelengths less than about 0.3 μm.

Energy Interaction with Earth Surface Features:


According to the principle of conservation of energy,
Ei (λ) = Er (λ) + Ea (λ) + Et (λ)
Where
Ei = incident energy
Er = reflected energy
Ea = absorbed energy
Et = transmitted energy

While dealing with the energy interactions of EMR with surface features, we have to consider two points: the material type and condition of the object, and the variation with wavelength across the EMR spectrum. These factors determine the proportions of energy reflected, absorbed, and transmitted; thus two objects may be indistinguishable in one spectral range yet very different in another wavelength band. Because most remote sensing systems operate in wavelength regions in which reflected energy predominates, the reflectance properties of earth features are very important. Hence it is useful to express the energy balance relationship of the previous equation in the form
Er (λ) = Ei (λ) - [Ea (λ) + Et (λ)]
That is, reflected energy equals the energy incident on a given feature reduced by the energy that is either absorbed or transmitted by that feature.
The geometric manner in which an object reflects energy is also an important consideration and it
depends upon the roughness of the object. Specular reflectors are flat surfaces that manifest mirror
like reflections, where the angle of reflection equals the angle of incidence. Diffuse reflectors are
rough surfaces that reflect uniformly in all directions. Most of the surfaces are neither perfectly
specular nor diffuse reflectors. Their characteristics are somewhat between the two extremes.
Reflection of the EMR is dictated by surface roughness in comparison to the wavelength of the
energy incident upon it. When the wavelength of incident energy is much smaller than the surface
height variations or the particle sizes that make up a surface, the reflection from the surface is
diffuse.


Diffuse reflections contain spectral information on the colour of the reflecting surface, whereas specular reflections do not. Hence, in remote sensing, we most often measure the diffuse reflectance properties of terrain features.
The reflectance characteristics of a feature may be quantified by measuring the portion of incident energy that is reflected. This is defined mathematically as
ρλ = Er (λ)/Ei (λ) = (energy of wavelength λ reflected from the object / energy of wavelength λ incident upon the object) × 100
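A small Python sketch of this bookkeeping (the energy values below are hypothetical, chosen only to show the arithmetic):

```python
def reflected_energy(incident, absorbed, transmitted):
    """Er(lambda) = Ei(lambda) - [Ea(lambda) + Et(lambda)], from conservation of energy."""
    return incident - (absorbed + transmitted)

def reflectance_percent(reflected, incident):
    """rho(lambda) = (Er / Ei) * 100."""
    return 100.0 * reflected / incident

Ei, Ea, Et = 100.0, 55.0, 30.0      # hypothetical energy units at one wavelength
Er = reflected_energy(Ei, Ea, Et)   # 15.0
print(reflectance_percent(Er, Ei))  # 15.0 percent reflectance
```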

Spectral Reflectance Curve:


It is the graph of the spectral reflectance of an object as a function of wavelength. The
configuration of the spectral reflectance curve shows the spectral characteristics of an object and
has a strong influence on the choice of wave length regions in which remote sensing data are
acquired for a particular application.

Conclusion:
A good understanding of the electromagnetic spectrum is necessary for remote sensing, because remote sensing utilizes various bands to acquire aerial photographs and satellite images. The nature of the different bands and their interaction with the atmosphere must be analysed to obtain proper results.


3 Interaction of electromagnetic radiation with the earth surface and atmosphere
Introduction

Visible light, radio waves, heat, ultraviolet rays and X-rays are various forms of electromagnetic energy. All this energy radiates in accordance with basic wave theory, which describes electromagnetic energy as travelling in a harmonic, sinusoidal fashion at the 'velocity of light' c, where
c = λv
v = wave frequency, the number of peaks passing a fixed point in space per unit time
λ = wavelength, the distance from one wave peak to the next
Since c is constant (3 × 10^8 m/sec), λ and v are inversely proportional to each other.
Electromagnetic waves are categorised by their wavelength location within the electromagnetic spectrum. The unit used to measure wavelength along the spectrum is the micrometer (µm); 1 µm = 1 × 10^-6 m.

(Figure: the electromagnetic spectrum. Courtesy: http://chesapeake.towson.edu/data/all_electro.asp)

The Electromagnetic Spectrum:

When all of the possible forms of radiation are classified and arranged according to wavelength or frequency, the result is the electromagnetic spectrum. It includes types of radiation that range from extremely low-energy, long-wavelength, low-frequency radiation, such as radio energy, to extremely high-energy, short-wavelength, high-frequency types, such as X-ray and gamma-ray radiation.


Energy interaction in the atmosphere-

The atmosphere can have a profound effect on, among other things, the intensity and spectral composition of radiation available to any sensing system. These effects are caused principally through the mechanisms of atmospheric scattering and absorption:
1) Scattering

2) Absorption

Scattering:
Atmospheric scattering is the unpredictable diffusion of radiation by particles in the atmosphere.
1) Rayleigh scatter: Rayleigh scatter happens when the radiation interacts with atmospheric molecules and other tiny particles that are much smaller in diameter than the wavelength of the interacting radiation.

The effect of Rayleigh scatter is inversely proportional to the 4th power of wavelength; hence short wavelengths scatter much more than long wavelengths (a quick check of this ratio is sketched after this list). Rayleigh scatter is a primary cause of 'haze' in imagery: a photograph taken from high altitude appears bluish grey.
2) Mie scatter: Mie scatter happens when atmospheric particle diameters essentially equal the wavelength of the energy being sensed. Water vapour and dust are major causes of Mie scatter.

3) Non-selective scatter: non-selective scatter happens when the diameters of the particles causing the scatter are much larger than the wavelengths of the energy being sensed. Water droplets are a major cause of such scatter.
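The λ^-4 dependence quoted for Rayleigh scatter can be checked numerically. A short illustrative sketch (the proportionality constant is omitted, so only the ratio is meaningful):

```python
def rayleigh_relative(wavelength_um):
    # Relative Rayleigh scattering intensity, proportional to 1 / lambda^4.
    return wavelength_um ** -4

blue, red = 0.4, 0.7  # micrometers
print(rayleigh_relative(blue) / rayleigh_relative(red))  # ~9.4: blue scatters ~9x more than red
```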

Absorption:

Atmospheric absorption results in the effective loss of energy to atmospheric constituents. Some of the most efficient absorbers of solar radiation are water vapour, carbon dioxide and ozone. The wavelength ranges in which the atmosphere is particularly transmissive of energy are referred to as 'atmospheric windows'.
One must consider together the interaction and interdependence between the primary sources of electromagnetic energy, the atmospheric windows through which source energy may be transmitted to and from earth surface features, and the spectral sensitivity of the sensors available to detect and record the energy. The choice of spectral range for a sensor has to be based on the manner in which the energy interacts with the features under investigation.

Energy interactions with earth surface features-


Electromagnetic waves that originate on the sun are radiated through space and eventually enter
the Earth's atmosphere. In the atmosphere, the radiation interacts with atmospheric particles,
which can absorb, scatter, or reflect it back into space. Much of the sun's high-energy radiation is
absorbed by the atmosphere, preventing it from reaching the Earth's surface. This absorption of
energy in the upper atmosphere is an important factor in allowing life to flourish on the Earth.
Atmospheric particles such as dust, sea salt, ash, and water droplets will reflect energy back into
space.
When electromagnetic energy is incident on any given earth feature, three fundamental energy interactions with the feature are possible: various fractions of the energy incident on the element are reflected, absorbed and/or transmitted.


As per the law of conservation of energy,

EI(λ) = ER(λ) + EA(λ) + ET(λ)
Where,
EI= incident energy
ER= reflected energy
EA= absorbed energy
ET= transmitted energy

(Figure courtesy: http://chesapeake.towson.edu)
The proportions of energy reflected, absorbed, and transmitted vary for different earth features, depending on their material type and condition. Within a given feature type, the proportions of reflected, absorbed and transmitted energy also vary at different wavelengths. Within the visible portion of the spectrum, these spectral variations result in the visual effect called 'colour'.
ρλ = ER(λ) / EI(λ) = (energy of wavelength λ reflected from the object / energy of wavelength λ incident upon the object) × 100
Reflection is a function of the surface roughness of the object. Specular reflectors are flat surfaces that manifest mirror-like reflections, where the angle of reflection equals the angle of incidence. Diffuse (or Lambertian) reflectors are rough surfaces that reflect uniformly in all directions. Most surfaces fall between the perfectly specular and perfectly diffuse extremes, and their geometric character (specular, near-specular, near-diffuse or diffuse) depends on the surface's roughness in comparison to the wavelength of the energy incident upon it. When the wavelength of incident energy is much smaller than the surface height variations or the particle sizes that make up the surface, the reflection from the surface is diffuse.

(Figure courtesy: http://chesapeake.towson.edu/data/all_electro.asp)

Diffuse reflections contain spectral information on the colour of the reflecting surface, whereas specular reflections do not. The reflectance characteristics of earth surface features may be quantified by measuring the portion of incident energy that is reflected. Measured as a function of wavelength, this is called the spectral reflectance.

(Figure: views of the Sun in the visible, ultraviolet and X-ray bands. Courtesy: http://chesapeake.towson.edu/data/all_electro.asp)

The photographs above illustrate a remote sensing technique that relies on high-energy radiation: comparing views of the Sun in various spectral bands.
A plot of the spectral reflectance of an object as a function of wavelength is termed its spectral reflectance curve.

15
GIS Reader

Spectral reflectance of vegetation, soil and water:


Vegetation:

a) Chlorophyll strongly absorbs energy in the wavelength bands centered at about 0.45 and
0.67 µm. Hence, our eyes perceive healthy vegetation as green in colour because of the
very high absorption of blue and red energy by plant leaves and the very high reflection
of green energy.

(Figures: simulated normal colour photograph; simulated colour IR photograph)


Soil:

a. Some of the factors affecting soil reflectance are moisture content, soil texture, surface roughness, presence of iron oxide, and organic matter content. These factors are complex, variable and interrelated. Soil moisture content is strongly related to the soil texture.

b. Soil texture: coarse, sandy soils are usually well drained, resulting in low moisture content and relatively high reflectance; poorly drained, fine-textured soils will generally have lower reflectance. In the absence of water, however, the soil itself may show the reverse tendency: coarse-textured soils appear darker than fine-textured soils.

(Figures: slope potential; soil erosion; integration of remote sensing data in a geographic information system)
Water:

a. Clear water absorbs relatively little energy at wavelengths less than about 0.6 µm. Reflectance changes with changes in the turbidity and chlorophyll concentration of the water.

Spectral response pattern:

• Spectral responses measured by remote sensors over various features often permit an
assessment of condition of the features. These responses are known as spectral signatures.

• Temporal effects change the spectral characteristics.


• Features that show different characteristics at different geographic locations at a given point of time give rise to spatial effects.

Atmospheric influences on spectral response patterns:


Spectral response patterns are influenced by the atmosphere. The atmosphere reduces the energy illuminating the ground object and modifies the relationship between the reflectance of the ground object and the incoming radiation (irradiance):
Ltot = (ρET / π) + Lp
where
Ltot = total spectral radiance measured by the sensor
ρ = reflectance of the object
E = irradiance on the object (incoming energy)
T = transmission of the atmosphere
Lp = path radiance, from the atmosphere and not from the object
Irradiance results from two sources
1) Directly reflected sunlight

2) Diffuse skylight

Irradiance varies with the seasonal changes in solar elevation angle and the changing distance
between earth and sun.
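To show how the terms of this relation combine, here is a minimal Python sketch; all input values are hypothetical:

```python
import math

def total_radiance(rho, irradiance, transmission, path_radiance):
    """Ltot = (rho * E * T) / pi + Lp."""
    return (rho * irradiance * transmission) / math.pi + path_radiance

Ltot = total_radiance(rho=0.25,           # object reflectance (dimensionless)
                      irradiance=1000.0,  # E, hypothetical irradiance on the object
                      transmission=0.8,   # T, atmospheric transmission
                      path_radiance=5.0)  # Lp, radiance added by the atmosphere itself
print(Ltot)  # ~68.7: more than the ground object alone contributes
```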


4 Atmospheric windows, physical basis of signatures: vegetation, soil, water bodies
1 Remote sensing
Remote sensing is the science and the art of obtaining information about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area or phenomenon under investigation (Lillesand and Kiefer, 1994).
Using various sources data is collected and analyzed to obtain information about the object, area
or phenomena under investigation. The sensors acquire data on the way various earth surface
features emit and reflect electromagnetic energy, and these data are analyzed to provide
information about the resources under investigation.
The basic two processes involved are data acquisition and data analysis. The elements of data
acquisition process are:
1. Energy sources
2. Propagation of energy through atmosphere
3. Energy interaction with earth’s surface features and retransmission of energy through the
atmosphere
4. Recording of energy by airborne and/or space borne sensors
5. Data transmission and processing
6. Resulting in the generation of sensor data in pictorial and/or digital form

The data analysis process involves examining the data using various viewing and interpretation
devices to analyze pictorial data and/or a computer to analyze digital sensor data. With the aid of
reference data, the analyst extracts information about the type, extent, location and condition of
various resources over which the sensor data were collected. This data is then compiled generally
in the form of hard copy maps and tables or as computer files that can be merged with other layers
of information in the geographic information system (GIS). Finally, the information is presented
to the users who apply it to their decision making process.
Energy Sources and Radiation Principles
Visible light refers to only one of the many forms of electromagnetic energy, others being radio
waves, heat, ultraviolet waves and x-rays. All this energy is assumed to be inherently similar,
radiating in accordance with the basic wave theory.
In remote sensing, electromagnetic waves are categorized by their wavelength location within the electromagnetic spectrum. The most prevalent units used to measure wavelength along the spectrum are the micrometer (µm), a unit of length equivalent to one-millionth of a meter, and the nanometer (nm), a unit of length equivalent to one-billionth of a meter.
Although names (such as ultraviolet and microwave) are generally assigned to regions of the electromagnetic spectrum for convenience, there is no clear-cut dividing line between one nominal spectral region and the next. At the very energetic (high-frequency, short-wavelength) end are gamma rays and X-rays. Radiation in the ultraviolet region extends from about 1 nm to about 0.36 µm. The visible region occupies the range between 0.4 and 0.7 µm, or its equivalent of 400 to 700 nm. The infrared (IR) region spans 0.7 to 100 µm. At shorter wavelengths (near 0.7 µm) infrared radiation can be detected by special film, while at longer wavelengths it is felt as heat.
Major regions of the electromagnetic spectrum
Region | Wavelength | Comments
Gamma ray | < 0.03 nm | Entirely absorbed by the earth's atmosphere; not available for remote sensing
X-ray | 0.03 to 30 nm | Entirely absorbed by the earth's atmosphere; not available for remote sensing
Ultraviolet | 0.03 to 0.4 µm | Wavelengths from 0.03 to 0.3 µm absorbed by the ozone layer
Photographic ultraviolet | 0.3 to 0.4 µm | Available for remote sensing the earth; can be imaged with photographic film
Visible | 0.4 to 0.7 µm | Available for remote sensing the earth; can be imaged with photographic film
Infrared | 0.7 to 100 µm | Available for remote sensing the earth; portions can be imaged with photographic film
Reflected infrared | 0.7 to 3 µm | Available for remote sensing the earth; the near infrared (0.7 to 0.9 µm) can be imaged with photographic film
Thermal infrared | 3 to 14 µm | Available for remote sensing the earth; cannot be captured with photographic film, so mechanical sensors are used to image this band
Microwave or radar | 0.1 to 100 cm | Longer wavelengths of this band can pass through clouds, fog and rain; images can be made with sensors that actively emit microwaves
Radio | > 100 cm | Not normally used for remote sensing of the earth
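As a compact restatement of the table, here is a hedged Python sketch that maps a wavelength to its nominal region (the boundaries follow the table, simplified to non-overlapping ranges; as noted above, the real dividing lines are not sharp):

```python
REGIONS = [                 # (upper bound in micrometers, region name)
    (3e-5,  "gamma ray"),
    (0.03,  "x-ray"),
    (0.4,   "ultraviolet"),
    (0.7,   "visible"),
    (3.0,   "reflected infrared"),
    (14.0,  "thermal infrared"),
    (100.0, "infrared (beyond thermal)"),  # the table's IR band extends to 100 um
    (1e6,   "microwave or radar"),         # 0.1 to 100 cm
]

def region_of(wavelength_um):
    for upper_bound, name in REGIONS:
        if wavelength_um < upper_bound:
            return name
    return "radio"

print(region_of(0.55))  # visible
print(region_of(10.0))  # thermal infrared
```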

Most common sensing systems operate in one or several of the visible, IR or microwave portions
of the spectrum. Within the IR portion of the spectrum, it should be noted that only thermal IR
energy is directly related to the sensation of heat, near and mid – IR energy are not.
Also, as per wave theory, the longer the wavelength involved, the lower its energy content. This has important implications in remote sensing: naturally emitted long-wavelength radiation, such as microwave emission from terrain features, is more difficult to sense than radiation of shorter wavelengths, such as emitted thermal IR energy.
The sun remains the most obvious source of electromagnetic radiation for remote sensing. However, all matter at temperatures above absolute zero (0 K or -273°C) continuously emits electromagnetic radiation, including terrestrial objects. The energy radiating from an object is, among other things, a function of its surface temperature (as expressed by the Stefan-Boltzmann law), which in turn shapes the spectral distribution of the emitted energy.
The earth's ambient temperature (i.e. the temperature of surface materials such as soil, water and vegetation) is about 300 K (27°C). Radiance from earth features therefore peaks at a wavelength of about 9.7 µm (as per Wien's displacement law) and is termed 'thermal infrared' energy. This wavelength of energy emitted by ambient earth features can be observed only with a non-photographic sensing system.
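The 9.7 µm figure can be verified with Wien's displacement law, λmax = b/T; a short sketch, where b is the standard displacement constant (not a value given in this reader):

```python
WIEN_B = 2898.0  # Wien's displacement constant, micrometer-kelvin

def peak_wavelength_um(temperature_k):
    """Wien's displacement law: lambda_max = b / T."""
    return WIEN_B / temperature_k

print(peak_wavelength_um(300.0))   # ~9.7 um: ambient earth features peak in the thermal IR
print(peak_wavelength_um(6000.0))  # ~0.48 um: the sun peaks in the visible
```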
Certain sensors, such as radar systems, supply their own source of energy to illuminate features of
interest. These systems are termed as ‘active systems’, in contrast to ‘passive systems’ that sense
naturally available energy.

Energy Interactions in the Atmosphere


Irrespective of its source, all radiation detected by sensors passes through some distance, or path length, of atmosphere, and this path length varies widely. The net effect of the atmosphere depends on these differences in path length and also on the magnitude of the energy signal being sensed, the atmospheric conditions present, and the wavelengths involved. These effects are caused principally through the mechanisms of atmospheric scattering and absorption.
Scattering
Atmospheric scattering is the unpredictable diffusion of radiation by particles in the atmosphere.
Rayleigh scatter occurs when radiation interacts with atmospheric molecules and other tiny particles that are much smaller in diameter than the wavelength of the interacting radiation. The effect of Rayleigh scatter is inversely proportional to the fourth power of wavelength, leading to a much stronger tendency to scatter short wavelengths than long wavelengths; this is why the sky appears blue during the daytime.
Mie scatter occurs when atmospheric particle diameters essentially equal the wavelengths of the energy being sensed. This type of scatter tends to influence longer wavelengths than Rayleigh scatter does. Water vapour and dust are major causes of Mie scatter.
Non-selective scatter occurs when the diameters of the particles causing the scatter are much larger than the wavelengths of the energy being sensed; water droplets, for example, cause such scatter. This scattering is non-selective with respect to wavelength, which is why clouds and fog appear white at visible wavelengths.
Absorption
In contrast to scatter, atmospheric absorption results in the effective loss of energy to atmospheric constituents. This normally involves absorption of energy at given wavelengths, mostly by water vapour, carbon dioxide and ozone. The absorption of electromagnetic energy at specific wavelengths by these gases strongly influences 'where we look' spectrally with any given remote sensing system.

2 Atmospheric windows:
Atmospheric windows are the wavelength ranges in which the atmosphere is particularly transmissive of energy. Remote sensing data acquisition is limited to these non-blocked spectral regions. The spectral sensitivity range of the human eye coincides with both an atmospheric window and the peak level of energy from the sun. Emitted heat energy from the earth is sensed through the windows at 3 to 5 µm and 8 to 14 µm using devices such as thermal scanners. Multispectral scanners sense simultaneously through multiple narrow wavelength ranges that can be located at various points from the visible through the thermal spectral region. Radar and passive microwave systems operate through a window in the region from 1 mm to 1 m. The important point to note is the interaction and interdependence between the primary sources of electromagnetic energy, the atmospheric windows through which source energy may be transmitted to and from earth surface features, and the spectral sensitivity of the sensors available to detect and record the energy.

Energy Interactions with earth surfaces


When electromagnetic energy is incident on any given earth surface feature, it may be reflected, absorbed and/or transmitted. Two points concerning this relationship should be noted:
• First, the proportions of energy reflected, absorbed, and transmitted will vary for different earth features, depending on their material type and condition. These differences permit us to distinguish different features on an image.
• Second, the wavelength dependency means that, even within a given feature type, the proportion of reflected, absorbed and transmitted energy will vary at different wavelengths.
Thus, two features may be indistinguishable in one spectral range and be very different in another wavelength band. Because many remote sensing systems operate in the wavelength regions in which reflected energy predominates, the reflectance properties of earth features are very important. The reflected energy is equal to the energy incident on a given feature reduced by the energy that is either absorbed or transmitted by that feature, and it is primarily a function of the surface roughness of the object.
Specular reflectors are flat surfaces that manifest mirror like reflections, while diffuse reflectors
are rough surfaces that reflect uniformly in all directions. Most earth surfaces are neither
perfectly specular nor diffuse reflectors. However, as diffuse reflections contain information on
the color of reflecting surface, these are the type of reflections measured in remote sensing. The
reflectance characteristics of earth features may be quantified by measuring the portion of incident
energy that is reflected. This is measured as a function of wavelength and is called spectral
reflectance. A graph of the spectral reflectance of an object as a function of wavelength is termed
as a spectral reflectance curve. The configuration of spectral reflectance curves gives us insight into the spectral characteristics of an object and has a strong influence on the choice of wavelength region in which remote sensing data are acquired for a particular application. Experience shows that many earth features of interest can be identified, mapped and studied on the basis of their spectral characteristics. This makes it necessary to know and understand the spectral characteristics of the particular features under investigation, and the factors influencing these characteristics, in any given application.

Spectral Reflectance of Vegetation, soil and water:


The figure shows typical spectral reflectance curves for three basic types of earth features: healthy green vegetation, dry bare soil (gray-brown loam), and clear lake water. The lines in this figure represent average reflectance curves compiled by measuring a large sample of features. The configuration of these curves is an indicator of the type and condition of the features to which they apply. The reflectance of individual features will vary considerably above and below the average, but these curves demonstrate some fundamental points concerning spectral reflectance. For example, spectral reflectance curves for healthy green vegetation almost always manifest the "peak-and-valley" configuration illustrated in the figure.
The valleys in the visible portion of the spectrum are dictated by the pigments in plant leaves. Chlorophyll strongly absorbs energy in the wavelength bands centered at about 0.45 and 0.67 µm. Hence our eyes perceive healthy vegetation as green in color because of the very high absorption of blue and red energy by plant leaves and the very high reflection of green energy. Conversely, if a plant is subjected to some stress that inhibits its normal growth and productivity and decreases chlorophyll production, red reflectance increases to the point that we see the plant turn yellow.
As we go from the visible to the near-IR portion of the spectrum at about 0.7 µm, a plant leaf typically reflects 40 to 50 percent of the energy incident upon it. Most of the remaining energy is transmitted, since absorption in this spectral region is minimal. Plant reflectance in the range 0.7 to 1.3 µm results primarily from the internal structure of plant leaves. Because this structure varies between plant species, reflectance measurements in this range permit us to distinguish between species. Likewise, many plant stresses alter the reflectance in this region, and sensors operating in this range are often used for vegetation stress detection.
Beyond 1.3 µm, energy incident upon vegetation is essentially absorbed or reflected, with little or no transmittance. Dips in reflectance occur at 1.4, 1.9 and 2.7 µm because water in the leaf absorbs strongly at these wavelengths; reflectance peaks occur at about 1.6 and 2.2 µm. Throughout the range beyond 1.3 µm, leaf reflectance is approximately inversely related to the total water present in a leaf, which is a function of both the moisture content and the thickness of the leaf.

Soil:
As shown in the figure, soil shows considerably less peak-and-valley variation in reflectance; that is, the factors that influence soil reflectance act over less specific spectral bands. Some of the factors
affecting soil reflectance are moisture content, soil texture, surface roughness, presence of iron
oxide and organic matter content. E.g. the presence of moisture in soil will decrease its
reflectance. Soil moisture content is strongly related to the soil texture: coarse, sandy soils are
usually well drained resulting in low moisture content and relatively high reflectance; poorly
drained, fine-textured soils will generally have lower reflectance. In the absence of water, however, the soil itself may exhibit the reverse tendency: coarse-textured soils will appear darker than fine-textured soils. Two other factors that reduce soil reflectance are surface roughness and the content of organic matter.

Water:
Considering the spectral reflectance of water, probably the most distinctive characteristic is the energy absorption at near-IR (NIR) wavelengths and beyond: water absorbs energy in these wavelengths whether it occurs in lakes or streams or is contained in vegetation or soil. Locating and delineating water bodies with remote sensing data is therefore done most easily in near-IR wavelengths because of this absorption property.
Clear water absorbs relatively little energy having wavelengths less than about 0.6 µm. High
transmittance typifies these wavelengths with a maximum in the blue-green portion of the
spectrum.
As the turbidity of water changes (because of the presence of organic or inorganic materials), transmittance, and therefore reflectance, changes dramatically. For example, water containing large quantities of suspended sediment resulting from soil erosion usually has a much higher visible reflectance than "clear" water in the same geographic area. Likewise, increases in chlorophyll concentration tend to decrease water reflectance in blue wavelengths and increase it in green wavelengths. These changes have been used to monitor the presence and estimate the concentration of algae via remote sensing data. Many important water characteristics, such as dissolved oxygen concentration, pH and salt
concentration, cannot be observed directly through changes in water reflectance. However, such
parameters sometimes correlate with observed reflectance. In short, there are many complex
inter-relationships between the spectral reflectance of water and particular characteristics which
requires one to use appropriate reference data to correctly interpret measurements made over
water.

Spectral Response Patterns


As spectral responses measured by remote sensors over various features often permit an
assessment of the type and/or condition of the features, these responses have often been referred
to as spectral signatures. Spectral reflectance and spectral emittance curves (for wavelengths
greater than 3.0 µm) are often referred to in this manner. The physical radiation measurements
acquired over specific terrain features at various wavelengths are also referred to as the spectral
signatures for those features.
Although many earth features manifest different spectral reflectance and/or emittance
characteristics, these characteristics result in spectral ‘response patterns’ rather than in spectral
‘signatures’. The reason for this is that the term signature tends to imply a pattern that is absolute
and unique. This is not the case with spectral patterns observed in the natural world which when
measured by remote sensors may be quantitative but they are not absolute. They may be
distinctive but not necessarily unique.
The variability of spectral signatures may cause severe problems in remote sensing data analysis if the objective is to identify various earth features spectrally. On the other hand, when the objective is to identify the condition of various objects of the same type, we may have to rely on spectral response pattern variability to derive this information.
Apart from these characteristics, temporal effects (i.e. any factors that change the spectral characteristics of a feature over time) and spatial effects (i.e. factors that cause the same type of feature at a given point in time to have different characteristics at different geographic locations) may also affect remote sensing data analysis. These effects might complicate the analysis of the spectral reflectance properties of earth features, but they may also be the key to gleaning the
information sought in an analysis. For example, the process of change detection is premised on the ability to measure temporal effects, such as measuring the change in suburban development near a metropolitan area using data obtained on two different dates.
An example of a useful spatial effect is the change in the leaf morphology of trees when they are
subjected to some form of stress. So, even though a spatial effect may complicate the analysis, at
times this effect may add just what is important in a particular application.

Atmosphere influences on spectral response patterns


The energy recorded by a sensor is always modified to some extent by the atmosphere between
the sensor and the ground. The atmosphere affects the ‘brightness’ or radiance, recorded over any
given point on the ground in two almost contradictory ways:
• First, it reduces the energy illuminating a ground object (and being reflected from it)
• Second, the atmosphere acts as a reflector itself, adding scattered, extraneous path
radiance to the signal detected by the sensor.

The dominance of sunlight versus skylight in any given image is strongly dependent on
weather conditions, while irradiance varies with the seasonal changes in solar elevation angle and
the changing distance between the earth and the sun.


5 Sensors: pushbroom sensors, whiskbroom sensors

Sensors
Remote sensors can be grouped according to the number of bands and the frequency range of
those bands that the sensor can detect. Common categories of remote sensors include
panchromatic, multispectral, hyperspectral, and ultraspectral sensors.

Panchromatic sensors cover a wide band of wavelengths in the visible light or near infrared light
spectrum. An example of a single band sensor of this type would be a black and white
photographic film camera.

Multispectral sensors cover two or more spectral bands simultaneously, typically in the range 0.3 µm to 14 µm.

Hyperspectral sensors cover spectral bands narrower than multispectral sensors. Image data from
several hundred bands are recorded at the same time offering much greater spectral resolution
than a sensor covering wider bands.

1.1 Multispectral Sensors


Multispectral scanners measure reflected electromagnetic energy by scanning the earth’s surface.
This results in digital image data, of which the elementary unit is a picture element: pixel. As the
name suggests, the measurements are made for different ranges of the EM spectrum. After the
aerial camera it is the most commonly used sensor.

Two types of multispectral scanners are distinguished: the whiskbroom scanner and the
pushbroom scanner.

Whiskbroom Scanner
A combination of a single detector plus a rotating mirror can be arranged in such a way that the
detector beam sweeps in a straight line over the earth across the track of the satellite at each
rotation of the mirror. In this way, the earth’s surface is scanned systematically line by line as the
satellite moves forward. Because of this sweeping motion, the whiskbroom scanner is also known
as the across-track scanner. The first multispectral scanners applied the whiskbroom principle.
Today many scanners are still based on this principle: NOAA/AVHRR, Landsat/TM, IRS/LISS.

Operation of the Whiskbroom Scanner


Using a rotating or oscillating mirror, such systems scan the terrain along scan lines that are at
right angles to the flight line. This allows the scanner to repeatedly measure the energy from one
side of the aircraft to the other. Data are collected within an arc below the aircraft typically of 90º
to 120º. Successive scan lines are covered as the aircraft moves forward, yielding a series of
contiguous, or just touching, narrow strips of observation comprising a two-dimensional image of
rows (scan lines) and columns.

At any instant in time, the scanner sees the energy within the system’s instantaneous field of view
(IFOV). The IFOV is normally expressed as the cone angle within which incident energy is
focused on the detector.
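The IFOV, together with the flying height, fixes the size of the ground resolution cell. A sketch of the usual small-angle relation D = H × β (the altitude and IFOV values below are hypothetical):

```python
def ground_cell_size_m(altitude_m, ifov_mrad):
    # Small-angle approximation: cell diameter D = H * beta, with beta in radians.
    return altitude_m * (ifov_mrad / 1000.0)

# Hypothetical airborne scanner: 2.5 mrad IFOV flown 1000 m above the terrain.
print(ground_cell_size_m(1000.0, 2.5))  # 2.5 m ground resolution cell at nadir
```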

2. Pushbroom Scanner

24
GIS Reader

Along-track scanners also use the forward motion of the platform to record successive scan lines
and build up a two-dimensional image, perpendicular to the flight direction. These systems are
also referred to as pushbroom scanners, as the motion of the detector array is analogous to the
bristles of a broom being pushed along a floor.

The pushbroom scanner is based on the use of charge-coupled devices (CCDs) for measuring the electromagnetic energy. A CCD array is a line of photo-sensitive detectors that function similarly to solid-state detectors. A single element can be as small as 5 µm. Two-dimensional CCD
arrays used in remote sensing are more sensitive and have larger dimensions. The first satellite
sensor using this technology was SPOT-I HRV. High resolution sensors such as IKONOS and
Orbview3 also apply the pushbroom principle. This enables a longer period of measurement over
a certain area, resulting in less noise and a relatively stable geometry. Since the CCD elements continuously measure along the direction of platform motion, this scanner is also referred to as an along-track scanner.

The pushbroom scanner records one entire line at a time. The principal advantage over the
whiskbroom scanner is that each position (pixel) in the line has its own detector.

Operation of Pushbroom Scanners


Along-track or pushbroom scanners record multispectral image data along a swath beneath an
aircraft. Forward motion of the aircraft is utilized to build up a two-dimensional image by
recording successive scan lines that are oriented at right angles to the flight direction. However,
there is a distinct difference between along-track and across-track systems in the manner in which
each scan line is recorded. In an along-track system, there is no scanning mirror. Instead, a linear
array of detectors is used. Linear arrays typically consist of numerous charge-coupled devices
(CCDs) positioned end to end. Each detector element is dedicated to sensing the energy in a
single column of data. The size of the ground resolution cell is determined by the IFOV of a
single detector projected on the ground.

Linear array CCDs are designed to be very small, and a single array may contain over 10,000
individual detectors. Each spectral band of sensing requires its own linear array. Normally, the
arrays are located in the focal plane of the scanner such that each scan line is viewed by all arrays
simultaneously. Linear array systems afford a number of advantages over across-track mirror
scanning systems. Firstly, linear arrays provide the opportunity for each detector to have a longer
dwell time over which to measure the energy from each ground resolution cell. This enables a
stronger signal to be recorded (and thus a higher signal-to-noise ratio) and a greater range
in the signal levels that can be sensed, which leads to better radiometric resolution. In addition,
the geometric integrity of linear array systems is greater because of the fixed relationship among
detector elements recording each scan line. The geometry along each row of data (scan line) is
similar to an individual photo taken by an aerial mapping camera. The geometric errors introduced
into the sensing process by variations in the scan mirror velocity of across-track scanners are not
present in along-track scanners. Because linear arrays are solid state microelectronic devices,
along-track scanners are generally smaller in size and weight and require less power for their
operation than across-track scanners. Also, having no moving parts, a linear array system has
higher reliability and longer life expectancy.
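The dwell-time advantage mentioned above can be made concrete with a back-of-the-envelope
comparison; the figures below are assumed purely for illustration:

    # Sketch: detector dwell time per ground cell, whiskbroom vs pushbroom.
    # Illustrative values only.
    v_ground = 7000.0        # platform ground-track speed, m/s
    gsd = 30.0               # ground resolution cell size, m
    pixels_per_line = 6000   # detectors (or samples) across the swath

    # The platform advances one cell in gsd / v_ground seconds, which is
    # the time available to record one complete scan line.
    line_time = gsd / v_ground                      # ~4.3 ms

    # Whiskbroom: a single detector must visit every pixel in the line.
    dwell_whiskbroom = line_time / pixels_per_line  # ~0.7 microseconds

    # Pushbroom: each pixel has its own detector, which can therefore
    # integrate for the full line time.
    dwell_pushbroom = line_time                     # ~4.3 ms

    print(dwell_whiskbroom, dwell_pushbroom,
          dwell_pushbroom / dwell_whiskbroom)       # thousands of times longer

The roughly three-orders-of-magnitude longer dwell is what allows the stronger signal and better
radiometric resolution described above (real whiskbroom systems recover part of the difference by
sweeping several detectors per mirror rotation).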


They use a linear array of detectors (A) located at the focal plane of the image (B) formed by lens
systems (C), which are "pushed" along in the flight track direction (i.e. along track). Each
individual detector measures the energy for a single ground resolution cell (D) and thus the size
and IFOV of the detectors determines the spatial resolution of the system. A separate linear array
is required to measure each spectral band or channel. For each scan line, the energy detected by
each detector of each linear array is sampled electronically and digitally recorded.

One disadvantage of linear array systems is the need to calibrate many more detectors. Another
current limitation to commercially available solid state arrays is their relatively limited range of
spectral sensitivity. Linear array detectors that are sensitive to wavelengths longer than the mid-
IR are not readily available.

Spectral Characteristics
To a large extent, the characteristics of a solid state detector are valid for a CCD-array. In
principle, one CCD-array corresponds to a spectral band and all the detectors in the array are
sensitive to a specific range of wavelengths. With current technologies, CCD array sensitivity
stops at 2.5 µm wavelength. If longer wavelengths are to be measured, other detectors need to be
used. One drawback of CCD arrays is that it is difficult to produce an array in which all the
elements have similar sensitivity. Differences between the detectors may be visible in the
recorded images as vertical banding.
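A common first-order remedy for such banding is to normalize each image column to common
statistics. The sketch below demonstrates the idea on synthetic data; operational processing
chains would instead apply laboratory-measured gain and offset coefficients per detector:

    # Sketch: first-order destriping by matching each column's statistics
    # to the whole image. Synthetic data; real systems use calibration tables.
    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.normal(100.0, 20.0, size=(512, 512))   # "true" radiances
    gain = rng.normal(1.0, 0.05, size=512)             # per-detector gain
    offset = rng.normal(0.0, 3.0, size=512)            # per-detector offset
    striped = scene * gain + offset                    # image with banding

    # Standardize each column, then restore the global mean and spread.
    col_mean = striped.mean(axis=0)
    col_std = striped.std(axis=0)
    destriped = (striped - col_mean) / col_std
    destriped = destriped * striped.std() + striped.mean()

    print(striped.mean(axis=0).std())    # large: visible vertical banding
    print(destriped.mean(axis=0).std())  # ~0: banding suppressed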

Geometric Characteristics
For each single line, pushbroom scanners have a geometry similar to that of aerial photos (which
have a ‘central projection’). In the case of flat terrain and a limited total field of view (FOV), the
scale is the same over the line, resulting in equally spaced pixels. The concept of a single IFOV
sweeping across the track does not apply to pushbroom scanners; instead, each detector element
has its own fixed IFOV.

Typical of most pushbroom scanners is the ability for off-track viewing. In such a situation, the
scanner is pointed towards areas to the left or right of the orbit track (off-track) or backwards or
forwards (along-track). This characteristic has two advantages: it is used to produce stereo-images,
and it can be used to image an area that is not covered by clouds at that particular moment. When
applying off-track viewing, similar to oblique photography, the scale in an image varies and
should be corrected for.
Hyperspectral Imaging
Hyperspectral imaging is a technique that combines both conventional imaging and spectroscopy.
Using this technology, both the spatial and spectral information of an object can be acquired. The
imaging produces 3D images or Hyperspectral image cubes and uses optical elements, lenses,
spatial filters and image sensors to capture the content at multiple wavelengths.

Almost all sensors that are multispectral in function have had to sample the EM spectrum over a
relatively wide range of wavelengths in each discrete band. These sensors therefore have low
spectral resolution. This mode is referred to as broad-band spectroscopy. Spectral resolution can
be defined by the limits of the continuous wavelengths (or frequencies) that can be detected in the
spectrum. In remote sensors an interval of bandwidth of 0.2 µm in the Visible-Near IR would be
considered low spectral resolution and 0.01 µm as high resolution. (The term has a somewhat
different meaning in optical emission spectroscopy, where it refers to the minimum spacing in µm
or Angstroms between lines on a photographic plate or separable tracings on a strip chart.)


Remote sensors with high spectral resolution are called hyperspectral imagers. With the resulting
hyperspectral curves, it is now practical to do rigorous analysis of surface compositions over
large areas. Moreover, the data can be displayed either as spectral curves, with detail comparable
to laboratory spectra, or as images similar to those obtained by Landsat, SPOT, etc. With spectral
curves we capture the valuable information associated with diagnostic absorption troughs, and
with images we get relatively pure scenes, colorized (through color compositing) from intervals
that represent limited color ranges in the visible or in false color for the near-IR (NIR).
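In software, such a scene is usually handled as a three-dimensional cube (rows x columns x bands),
from which the spectral curve of any pixel can be pulled out and its absorption troughs located.
A minimal sketch with synthetic reflectances (the 2.2 µm feature below is an assumed stand-in
for a clay-mineral absorption):

    # Sketch: a hyperspectral image cube and one pixel's spectral curve.
    # Synthetic data; real cubes are read from sensor files.
    import numpy as np

    rows, cols, n_bands = 100, 100, 224
    wavelengths = np.linspace(0.4, 2.5, n_bands)   # micrometres

    cube = np.full((rows, cols, n_bands), 0.35)    # flat background spectrum
    # Impose an absorption trough near 2.2 um on part of the scene.
    trough = 0.2 * np.exp(-((wavelengths - 2.2) / 0.05) ** 2)
    cube[40:60, 40:60, :] -= trough

    spectrum = cube[50, 50, :]               # spectral curve of one pixel
    print(wavelengths[np.argmin(spectrum)])  # ~2.20 um: diagnostic trough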

Applications of Multi- and hyperspectral imaging


Multi- and hyperspectral imaging technology is used in environmental monitoring, biological,
earth science, transportation, precision agriculture, and forestry applications to deliver data and
information. Ground-based, attached to microscopes or telescopes, hand-held, airborne and
spaceborne systems are used to observe scenes ranging from microscopic objects (e.g. cancer
cells) up to planets and galaxies. Among typical applications are:

• Precision agriculture/farming (monitoring soil conditions, predicting yield, plant
identification, etc.)
• Plant pathological stress detection and characterization (detecting disease or pest infestation)
• Veterinary (medical diagnoses, condition estimation, etc.)
• Food quality inspection (inspection and sorting of fresh fruits and vegetables, milk and oil
quality inspection, poultry, fish and meat quality inspection, fat estimation in meat, etc.)
• Forestry, vegetation and canopy studies (mapping tree species, tree volume/size/age
estimation, detecting damaged/broken trees, foreign body detection, etc.)
• Eco system monitoring
• Environmental (wetlands, land cover, hydrology, etc.)
• Plume detection and analysis
• Water quality and coral reefs monitoring
• Littoral studies (bathymetry, water clarity, etc.)
• Health care (food safety, medical diagnoses, etc.)
• Biological and chemical detection (detecting and identifying hazardous materials).
• Material identification (natural and man-made materials)
• Mineral exploration and mapping
• Camouflage and concealment detection
• Disaster mitigation
• City planning and real estate
• Trafficability analysis
• Law enforcement (measuring spill extent and pollutants, tracking discharges of chemicals or
oil, detecting illegal activities in protected areas, etc.)

Applications of multispectral scanner data are mainly in the mapping of land cover, vegetation,
surface mineralogy and surface water. Multispectral scanners are mounted on airborne and
spaceborne platforms. A multispectral scanner operates on the principle of selective sensing in
multiple spectral bands. The spectral range of multispectral scanners extends from 0.3 to 14 µm.


The technique has emerged as a very powerful method for continuous sampling of broad intervals
of the spectrum. Such an image consists of about a hundred or more spectral bands that are
adjacent to each other, and the characteristic spectrum of every target pixel is acquired. This
precise information enables detailed analysis of a dynamic environment or any object.

After years of being restricted to laboratories and the defense industry, the commercialization of
these technologies is well and truly underway. Frost & Sullivan expects the market to grow at a
slower rate over the short term than over the longer term, for reasons including lack of awareness
of the products, lack of competition and price. However, advances in technology, data processing
algorithms and the increase in competition are expected to aid strong penetration into various
industrial verticals over the long term.


6 Opto-Mechanical Scanners
The imaging sensors on board satellites were essentially opto-mechanical scanners. Most of
the limitations associated with photographic and TV imaging systems are overcome in opto-
mechanical scanners. The principle of operation of an opto-mechanical scanner is shown
schematically in the figure below.

Schematic of operation of an opto-mechanical scanner

The radiation emitted or reflected from the scene is intercepted by a scan mirror inclined at 45°
to the optical axis of the telescope. The telescope focuses the radiation on to a detector. In this case,
the detector receives radiation from an area on the ground which is determined by the detector
size and focal length of the optics. This is called a picture element or a pixel. By rotating the scan
mirror the detector starts looking at adjacent picture elements on the ground. Thus, information is
collected pixel by pixel by the detector. If such an instrument is mounted on a moving platform,
such as an aircraft or a spacecraft, so that the rotation of the scan mirror collects information from a
strip on the ground at right angles to the direction of motion of the platform and also if the
scanning frequency is adjusted such that by the time the platform moves through one picture
element the scan mirror is set to the start of the next scan line, then successive and contiguous
scan lines can be produced. Thus, in cross track direction information is collected from each pixel
(because of the scan mirror motion) to produce one line of image and in the along track direction
successive lines of image in contiguous fashion are produced by the platform motion. The scan
frequency has to be correctly adjusted, depending on the velocity of platform, to produce a
contiguous image.
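The adjustment described above reduces to a simple relation: the mirror must start a new line in
exactly the time the platform needs to advance one pixel. A short sketch, with values assumed
only for illustration:

    # Sketch: scan frequency needed for contiguous scan lines.
    # Illustrative values, loosely spacecraft-like.
    v_ground = 6500.0    # ground-track velocity of the platform, m/s
    pixel = 80.0         # along-track pixel (ground element) size, m

    # One new scan line must begin every pixel / v_ground seconds.
    scan_frequency = v_ground / pixel
    print(scan_frequency)          # ~81 sweeps per second

    # Placing several detectors one behind the other in the focal plane
    # (Landsat MSS used 6 per band) divides the required mirror rate.
    print(v_ground / (pixel * 6))  # ~13.5 sweeps per second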
To produce multispectral imagery, the energy collected by the telescope is channeled to a spectral
dispersing system (a spectrometer). Such systems, which can generate imagery simultaneously in
more than one spectral band, are called Multispectral Scanners (MSS). The figure below gives the
functional block diagram of a multispectral scanner.

Thus, the MSS has a scan mirror, collecting optics, a dispersive system (which essentially
spreads the incoming radiation into different spectral bands) and a set of detectors appropriate for
the wavelength regions to be detected. The outputs of the detectors go through electronic
processing circuits. The data from the scene along with other data like attitude of the platform,
temperatures of the various subsystems etc. are formatted together and the combined information
is either recorded on a magnetic medium (as is usually the case with aircraft sensors) or
transmitted through telemetry for spacecraft sensors. Details of some of the major subsystems to
realize an opto-mechanical scanner are given below.

a) Scanning Systems: In an opto-mechanical imager, the scanning can be carried out either
in the object plane or in the image plane. In the image plane scanner, the scan mirror is
kept after the collecting optics near to the focal plane and the mirror directs each point in
the focal plane to the detector. Obviously such a system requires the collecting optics
corrected for the total field of view, which is quite difficult, especially if a reflective
system has to be used. However, it requires relatively smaller size of the scan mirror.
Though image plane scanning was used in some of the early opto-mechanical
multispectral scanners, it is not generally used because of the large field correction
required for the total field of view. Moreover, with the availability of linear array
CCDs, the scope for image plane scanning using mechanical systems is decreasing. In
object plane scanning the rays from the scene fall on to the scan mirror, which reflects
the radiation to the collecting telescope. Here the direction of the rays at the collecting optics
remains the same irrespective of the scan mirror position. Thus, when object plane scanning
is used, the collecting optics need only be corrected for a small field around the optical
axis. The extent of field correction depends on IFOV, and the distribution of detectors in
the focal plane for reducing scanning frequency or for additional spectral bands.


b) Collecting Optics – collecting optics can be refractive, reflective or a combination of
refractive and reflective elements, called catadioptric. When the spectral bands of interest
are spread over a broad wavelength region extending from visible to thermal IR,
reflective optics is used to avoid dispersion. The 3 generally used basic reflective
telescope systems are:
• Newtonian, which consists of a concave paraboloid primary, with a flat mirror as
secondary near the prime focus, so placed as to bring the focus to the side of the
telescope.
• Cassegrain, which has a concave primary and a convex secondary placed
inside the prime focus so that it redirects the rays through a hole in the primary.
• Gregorian, which is similar to the Cassegrain, except that the secondary is
concave and kept outside the prime focus.

Of the three configurations, the Cassegrain system has the smallest tube length for the same
effective focal length and primary mirror diameter. Since it is desirable to keep the tube
length to a minimum in order to reduce weight and volume, spaceborne opto-mechanical
scanners generally use the Cassegrain configuration as the collecting telescope.

c) Spectral dispersion system – the spectral dispersion system can be one of the commonly used
systems such as a grating or a prism. There are also special beam splitters which selectively
transmit/reflect a particular band of wavelengths. The use of such beam splitters with
appropriate band pass filters at the detector facilitates specific spectral band selection.
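For the grating case, the separation of wavelengths follows the grating equation
d·sin θ = m·λ (normal incidence assumed). The sketch below, with an assumed groove density,
shows how different bands emerge at different angles and can therefore fall on different detectors:

    # Sketch: angular dispersion of a grating, d * sin(theta) = m * lambda.
    # Groove density is an assumed illustrative value.
    import math

    grooves_per_mm = 300
    d = 1e-3 / grooves_per_mm      # groove spacing, metres
    m = 1                          # first diffraction order

    for wavelength in (0.5e-6, 1.0e-6, 2.0e-6):  # visible, near-IR, SWIR
        theta = math.degrees(math.asin(m * wavelength / d))
        print(f"{wavelength * 1e6:.1f} um diffracted to {theta:.1f} deg")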

d) Detectors – different types of detectors are available to cover the entire OIR region. The
selection of a detector depends, among other things, on the required spectral response, specific
detectivity, responsivity and response time. The detectors are mainly of two types:
Useful spectral ranges for typical detectors (operating temperature of all detectors is 300 K unless
noted).


Thermal detectors – Thermal detectors absorb radiant energy, raising the detector temperature;
a parameter of the device which changes with temperature is then detected, viz. resistance in
the case of a bolometer, voltage in the case of a thermocouple.
Quantum detectors – In quantum detectors the absorbed photons excite electrons into the
conduction band, changing the electrical characteristics of the responsive element, or cause
electrons to be emitted.

LANDSAT Multispectral Sensors:-

The multispectral scanner system on board the NASA earth resources technology satellite
LANDSAT-1, popularly known as MSS, was the first operational satellite-borne opto-mechanical
scanner for civilian applications.

Thematic Mapper:-

The Thematic Mapper is an advanced, second-generation opto-mechanical multispectral scanner
first carried on board LANDSAT-4. TM provides 7 narrow spectral bands covering the visible, near
infrared, middle infrared and thermal infrared spectral regions, with a 30 m resolution in the
visible, near and middle infrared bands and 120 m resolution in the thermal infrared. Apart from
the improved spatial and spectral resolution, TM provides a factor of 2 improvement over MSS in
radiometric sensitivity.
The Very High Resolution Radiometer (VHRR) on board INSAT is also an opto-mechanical
scanner. In this case, since the satellite is geostationary and 3-axis stabilized, a 2-axis scan mirror
is used to compensate for the lack of relative motion between the platform and the scene.


7 Platforms for Remote Sensing


1. Remote sensing:
It is the science of obtaining information about a phenomenon without being in physical contact
with it. It deals with the detection and measurement of phenomenon with devices sensitive to
electromagnetic energy. Aircraft and satellites are most common platforms from which the remote
sensing observations are made.

2. Platforms:
Broadly, platforms can be defined as the vehicles that carry a sensor, i.e. a stage on which
a sensor or camera is mounted to acquire information about the earth’s surface. Platforms are
classified by their altitude above the earth’s surface. Three different types of platforms are used to
collect data or information from the earth’s surface and transmit it to an earth receiving station for
further analysis and interpretation.
Types of platforms:
1. Ground observation platforms
2. Airborne observation platforms
3. Spaceborne observation platforms

Ground Based Observation Platform:


Ground based remote sensing systems are used for earth resources studies. These are mainly
used for collecting ground truth data or for laboratory simulation studies.
Ground observation platforms function on the principle of “signal–object” and “signal–sensor”
interactions. These studies are made both at laboratory and field levels. They help in the design
and development of sensors for the identification and characterization of land
features.
The different types of ground platforms include the following:
1. Towers
2. Cherry pickers
3. Portable Masts
4. Hand held platforms
To collect ground truth for laboratory and field experiments, portable hand-held photographic
cameras and spectroradiometers are used.
To work at heights of about 15 m above the ground, cherry pickers with their automatic
recording sensors can be used.
Towers can be raised for placing the sensors at a greater height for observation. Towers can be
dismantled and moved from place to place.
For testing or collecting reflectance data from field sites, portable masts mounted on vehicles
are used. These masts are used to support cameras and other sensors.
e.g. Automated data collection platform instrumented to provide data on stream flow
characteristics.

Airborne Observation Platform:


Airborne & Space Borne Platforms are used in remote sensing of earth resources.
These can be classified into following:
1. Balloon Borne
2. Aircrafts platforms
• Drones (Short Spy)
• Airborne High Altitude Photography
• Airborne Multiple Scanner
3. High Altitude Sounding Rocket
Balloon Borne Platform:
In the late 1800s, balloons were used for studying the earth’s surface, the atmosphere and celestial
bodies. For that purpose, balloons were developed which could ascend to altitudes of up to
49 km. Such balloons are designed to check the performance of sensors and carriers at different
altitudes. However, due to meteorological factors, e.g. wind velocity, the use of balloons was restricted.
Balloons are of two types: free balloons and tethered balloons.
Free balloons are designed to follow a desired trajectory and return to their starting point, after
covering a distance along a predetermined route. These are used for specific applications, e.g.
aerial photography and nature conservation studies.
Balloon platforms consist of a rigid circular base plate for supporting the entire system, protected
by an insulating and shock proof light casing. It is roll stabilized and temperature controlled.
A camera, a multispectral photometer, power supply units and a remote control system are the
equipment carried by balloons. The systems are brought back to earth by tearing the carrying
balloon through remote control.
Free balloons are used to obtain high resolution photographs of the planets from an altitude of
25,000 m through remotely controlled astronomical telescopes.
The balloons can also be connected to an earth station by means of tensile wires having high
strength and flexibility. These are called tethered balloons. The tether line serves the
additional purpose of carrying the antenna, power line and gas tube.
In cases where the wind velocity is less than 35 km/hr at an altitude of 8000 m, spherical balloons are
used. Natural shaped balloons are restricted to places where the wind velocity is less than 80
km/hr. Streamlined balloons have the capacity to fly nearer to the tether point and can be designed
to withstand a chosen wind pressure for a given payload, flight duration and anticipated life.
These balloons have been successfully used to support aerial cameras for mapping archaeological
sites.
Aircraft Platform:
Aircraft are the most commonly used remote sensor platforms for obtaining good aerial photographs.
An aircraft used for this purpose should be stable, free from vibrations and oscillations, and
capable of flying at a uniform minimum speed. Ceiling height is the most important
criterion for classifying aircraft.


Drone:
It is a pilotless vehicle, more like a miniature aircraft, which is remotely controlled from the
ground station. It has a climb rate of 4m/s with an operating altitude of about 0.5 km, a forward
speed of about 100 km/hr and it can also exhibit hovering flight. This sensor platform has a
central body in the shape of a circular tube for carrying the engine, propelling fan, fuel tank and
the sensor system. The tail of the drone has small wing structures and a tail plane with control
mechanisms. The servo motor systems, operating the aerodynamic controls, receive signals related
to the altitude and position of the aerial vehicle from sensors within the drone and from the
ground.
The function of the drone sensors is to maintain the altitude (of the drone) demanded by the
ground control or by a self-contained navigation system. The drone’s payload includes equipment
for photography, infrared detection, radar observation and TV surveillance. The unique advantage
of such a device is that it can be accurately located above the area for which data
is required. It is an all-weather platform capable of both night and day observation.

Airborne High Altitude Photography:


Traditionally, aircraft equipped with large format cameras mounted on vibrationless platforms
were used to acquire aerial photographs of land surface features. Different altitudes of an aircraft
result in images of different scales, with different ground resolutions, for specific applications.
While low altitude aerial photography results in large scale images providing detailed information
on the terrain, high altitude photography offers smaller scale images covering a larger study area
with fewer photographs. The main drawback of aircraft photography is that it is restricted by the
film format; digital recording systems do not have such limitations.

Airborne Multiple Scanner:


Aircraft platforms are also used for testing the remote sensors under development. The
photographic cameras, electronic imagers, across track scanners and radar and microwave
scanners have been tested over ground truth sites from aircraft platforms in many NASA programs.
Signals from the scanners are controlled in flight at the operator console. These signals are
recorded in analog form by a wide-band magnetic tape recorder and are later digitized and
reformatted on the ground for digital image processing and information extraction. There are
different types of scanners having different spectral range – ultraviolet, visible, infrared etc.

High Altitude Sounding Rockets:


These are useful in assessing the reliability of remote sensing techniques as far as their
dependence on the distance from the target is concerned. Synoptic imagery can be obtained from
such rockets for areas of some 500,000 square kilometers per frame. The high altitude sounding
rocket is fired from a mobile launcher. During the flight its sensors are held in a stable attitude by
an automatic control system. Once the desired scanning work from a stable altitude is over, the
payload and the spent motor are returned to the ground gently by parachute, enabling the recovery
of the data/photographic records. The Skylark earth resources rocket is also a platform of this type.
The sensor payload in this case consists of two cameras of the Hasselblad type. This rocket system
has been used in surveys over Australia and Argentina.

Disadvantages of Airborne Platform:


1. Expensive
2. Seasonally dependent (rainy season poses serious limitation)
3. Cloudy weather is also another draw back
4. Defense clearance for photography over certain areas, as well as landing and take-off
permission, is a time-consuming affair; hence planning becomes difficult.

Space borne platform:


These are essentially satellite platforms. Since there are no atmospheric hindrances in space, the
orbits for the space platforms can be precisely defined. The entire earth, or a part of it, can be
covered at specific intervals.
The mode can be geostationary, permitting continuous sensing of a portion of the earth, or sun-
synchronous, with a low altitude polar orbit covering the entire earth at the same equator crossing
time. Space borne platforms can also be used to view extraterrestrial bodies without interference
from the earth’s atmosphere.
Synoptic coverage of the earth on a periodic basis with low maintenance expenses is very useful
for natural resource management.
Although the initial investment cost is high, spacecraft remote sensing is cheaper than
aircraft remote sensing on account of its global repetitive service. Since the altitude of an orbiting
or geostationary satellite is very high, the resolution is poorer.
Space borne platforms can be classified into the following:
1. Low altitude satellites
2. High altitude geostationary satellites
3. Space shuttles
Satellites launched to an altitude of about 36,000 km, where the angular velocity of the satellite
equals that of the earth, are called geostationary satellites. These satellites remain stationary over
a certain area and continuously watch the entire hemispherical disc. The coverage is about 1/3 of
the earth, so only 3 satellites are needed to cover the entire earth.
These satellites are mainly used for communication purposes, meteorological applications and for
earth resource management.
Usually the satellites can be classified into two categories:
1. Manned satellite platforms – These are used for rigorous testing of the remote sensors
on board so that they can be finally incorporated in the unmanned satellites.
2. Unmanned satellite platforms – These satellites are space observatories which provide
a suitable environment in which the payload can operate, the power to perform, the means
of communicating the sensor-acquired data and spacecraft status to the ground stations,
and a capability of receiving and acting upon commands related to spacecraft control
and operation. The satellite mainframe subsystems include the structural subsystem,
orbit control subsystem, attitude measurement subsystem, power subsystem, thermal
control subsystem, telemetry, and the storage and telecommand subsystem.


8 Orbits: Geostationary and Polar Orbiting Satellites


Satellite:

A satellite is an object that orbits around another object in space. There are two types of satellites:
Natural and Man-made (Artificial).

Artificial satellites are man-made robots that are purposely placed into orbit around Earth by
rocket launchers. These satellites perform numerous tasks in communication industry, military
intelligence and scientific studies on both Earth and space.

There are many characteristics that describe any given satellite remote sensing system and
determine whether or not it will be suitable for a particular application. Among the most
fundamental of these characteristics is the satellite’s orbit. Satellites can operate in several types
of Earth orbit. The most common orbits for environmental satellites are

• Geo-stationary satellite orbit: A geostationary satellite completes one orbit around
the earth in the same amount of time needed for the earth to rotate once about its axis, and thus
remains in a constant relative position over the equator.
• Polar orbit: An orbit with an inclination close to 90° is referred to as near polar because
the satellite will pass near the north and south poles on each orbit.
A satellite in orbit about a planet moves in an elliptical path with the planet at one of
the foci of the ellipse. As well as providing a synoptic view of regional relationships, the satellite
platform can be put into orbit in such a fashion that it will provide repeated coverage of the whole
of the Earth’s surface. Important elements of the orbit include its altitude, period, inclination and
equatorial crossing time.
Orbital Altitude:
Most earth observation satellites have altitudes of more than 400 km above the earth’s surface,
while some operate at approximately 36,000 km altitude. The first of these groups consists mostly
of ‘polar or near-polar orbiting satellites’ (low level satellites) occupying so-called ‘sun
synchronous orbits’; the second group are ‘geostationary satellites’ (high level satellites).

Orbit Inclination:
The inclination of a satellite’s orbit refers to the angle at which it crosses the equator. An orbit
with an inclination close to 90° is referred to as near polar because the satellite will pass near the
north and south poles on each orbit. An ‘equatorial orbit’, in which the spacecraft’s ground track
follows the line of the equator, has an inclination of 0°. Two special cases are sun-synchronous
orbits and geostationary orbits.

Fig1: Sun synchronous orbit


A sun-synchronous orbit results from a combination of orbital period and inclination such that the
satellite keeps pace with the sun’s westward progress as the earth rotates. Thus, the satellite
always crosses the equator at precisely the same local sun time. A geostationary orbit is an
equatorial orbit that will produce an orbital period of exactly 24 hrs. A geostationary satellite,
thus, completes one orbit around the earth in the same amount of time needed for the earth to
rotate once about its axis and remains in a constant relative position over the equator.
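The altitude of the geostationary orbit follows directly from Kepler’s third law: requiring the
orbital period to equal one rotation of the earth (one sidereal day, about 23 h 56 min) fixes the
orbit radius. A short check in Python:

    # Sketch: geostationary altitude from Kepler's third law,
    # T^2 = 4 * pi^2 * a^3 / GM.
    import math

    GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
    T = 86164.1               # one sidereal day, seconds
    R_earth = 6378.137e3      # equatorial radius, metres

    a = (GM * T ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
    print((a - R_earth) / 1e3)   # ~35,786 km above the equator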

Orbit configuration:
Geo-stationary orbit / geo-synchronous orbit:

Fig 2: The geostationary satellite orbits at the same rate as the earth, so it remains above a fixed
spot on the equator and monitors one area constantly
(Source: physics.uwstout.edu/wx/wxsat/measure.htm)

Geo-stationary satellites provide the kind of continuous monitoring necessary for
intensive data analysis. They circle the Earth in a geosynchronous orbit, which means they orbit
the equatorial plane of the Earth at a speed matching the Earth's rotation. This allows them to
hover continuously over one position on the surface. The geosynchronous plane is about 35,800
km (22,300 miles) above the Earth, high enough to allow the satellites a full-disc view of the
Earth. Because they stay above a fixed spot on the surface, they provide a constant vigil for the
atmospheric "triggers" for severe weather conditions such as tornadoes, flash floods, hail storms,
and hurricanes. When these conditions develop the Geo-stationary satellites are able to monitor
storm development and track their movements. Geo-stationary satellite imagery is also used to
estimate rainfall during the thunderstorms and hurricanes for flash flood warnings, as well as
estimate snowfall accumulations and overall extent of snow cover. Satellite sensors also detect ice
fields and map the movements of sea and lake ice. Geostationary satellites measure in "real time",
meaning they transmit photographs to the receiving system on the ground as soon as the camera
takes the picture. A succession of photographs from these satellites can be displayed in sequence
to produce a movie showing cloud movement. This allows forecasters to monitor the progress of
large weather systems such as fronts, storms and hurricanes. Wind direction and speed can also be
determined by monitoring cloud movement.
The orbit coverage is dependent upon the type of orbit in which the satellite is placed.
Satellites in geostationary orbit, e.g. INSAT, can view the Earth as a full disc from an
altitude of about 36,000 km. Where studies of large cloud formations are required, a geostationary
orbit is ideal for monitoring their progress over a large expanse of ocean (once every half hour).
Polar Orbit:


Fig 3: The polar orbiting satellite scans from north to south, and on each successive orbit the
satellite scans a strip further to the west
(Source: physics.uwstout.edu/wx/wxsat/measure.htm)
There are limitations on sensor sizes and apertures that can be placed in space, due to the size and
weight of the payload on the satellite. These problems can be partially overcome using a circular
orbital configuration with a high inclination. This is known as a polar orbit, as it is based on
overflying the poles, typically 14 times a day. Precise details of the final orbit configuration depend
upon another factor, the nodal crossing time. This is the point at which an Earth observation
satellite crosses the equator, either heading towards the North or South Pole. Preference for either
of these is determined by the particular requirements of the users for the viewing of the target, at
different sun angles, throughout the year. Additionally, the rotation of the earth underneath the
satellite, combined with natural small variations in the orbit, causes a different part of the earth’s
surface to be viewed on each orbit of the satellite. The orbit can be adjusted to ensure that it
exactly repeats a pass over the same location to study the temporal variations in, for e.g., a land
feature.
The Terra/Aqua satellites are polar orbiting satellites.
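The figure of about 14 orbits per day can be verified with the same Kepler relation, assuming a
circular orbit at roughly the Terra/Aqua altitude of 705 km:

    # Sketch: period and daily orbit count of a low polar orbiter.
    # The 705 km altitude is an assumed, Terra/Aqua-like value.
    import math

    GM = 3.986004418e14          # m^3/s^2
    a = 6378.137e3 + 705e3       # semi-major axis of a circular orbit, m

    T = 2 * math.pi * math.sqrt(a ** 3 / GM)
    print(T / 60)        # period: ~98.8 minutes
    print(86400 / T)     # ~14.6 orbits per day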
True polar orbits are preferred for missions whose aim is to view longitudinal zones under the full
range of illumination conditions.
Oblique orbiting (near polar orbit) satellites are the ones whose orbital planes cross the plane
of the equator at an angle other than 90°. Oblique orbiting satellites may be launched eastwards
into direct or prograde orbits or westwards into retrograde orbits. Because the earth is not a
perfect sphere it exercises a gyroscopic influence on satellites in oblique planes such that those in
prograde orbits regress while retrograde orbits advance or precess with respect to the planes of
their initial orbits.

Fig 4: Near polar orbits: Prograde Fig 5: Near polar orbits: Retrograde

The orbital paths traced out by the satellite determine the revisit rate that can be achieved
for the particular ground station. The rate at which the satellite retraces a specific path determines
how frequently measurements can be taken where multi temporal studies are required. These
factors govern the rates at which a satellite will generate information for a data centre.


9 Brief History Of Aerial Photography


First Known Photograph Was Taken In 1827
In 1827, Joseph Nicéphore Niépce reportedly took the first photograph. Unable to draw, he
developed a process that he called heliography. His first picture shows a view from his studio
window and required eight hours of exposure time. The picture is difficult to decipher: with the
exposure lasting eight hours, the sun had time to move from east to west, appearing to shine on
both sides of the building.

On January 4, 1829, Niépce entered a partnership
arrangement with Louis Jacques Mandé Daguerre, but the partnership lasted only a few years,
Niépce dying in 1833.
Daguerre continued their work and in 1839 announced the development of the process called the
"daguerreotype." The early daguerreotype pictures had several drawbacks, one of which was the
length of the exposure time. One daguerreotype, taken from the roof of a tall building, might be
considered the first oblique aerial photograph. Taken in 1839, the photograph apparently shows
an empty street in Paris during the middle of the day. Due to the long exposure time, moving
objects such as people walking and wagons moving were not recorded. The one exception is a
man who stopped to have his shoes shined.

Over time the daguerreotype process improved but was eventually replaced by newer and better
processes. In the United States, daguerreotype photographs were popularly called “tintypes.” By
1851, Scott Archer of England developed the process of coating glass plates with sensitized silver
compounds. The plates were referred to as “wet plates” and the process reduced the exposure
time to one-tenth that of the daguerreotype process.
NADAR CARICATURIZED IN 1862

Once a technique was established for taking pictures, an adequate aerial platform was needed for
taking aerial photographs. The only platforms available at the time were balloons and kites. In
1858, Gaspard Felix Tournachon (later known as "Nadar") captured the first recorded aerial
photograph from a balloon tethered over the Bievre Valley. However, the results of his initial
work were apparently destroyed. On the other hand, his early efforts were preserved in a caricature
prepared by Honoré Daumier for the May 25, 1862 issue of Le Boulevard. Nadar continued his
various endeavors to improve and promote aerial photography. In 1859, he contacted the French
Military with respect to taking "military photos" for the French Army's campaign in Italy and
preparing maps from aerial photographs. In 1868 he ascended several hundred feet in a tethered
balloon to take oblique photographs of Paris.

On October 13, 1860, James Wallace Black, accompanied by Professor Sam King, ascended to an
altitude of 1200 feet in King's balloon and photographed portions of the city of Boston. A cable
held the balloon in place. Black, the photographer, made eight exposures of which only one
resulted in a reasonable picture. This is the oldest conserved aerial photograph. He worked under
difficult conditions with the balloon, which although tethered, was constantly moving. Combined
with the slow speed of the photographic materials being used it was hard to get a good exposure
without movement occurring. He also used wet plates and had to prepare them in the balloon
before each exposure. After descending to take on more supplies, King and Black went up again
with the idea of not only covering Boston but also recording the surrounding countryside.
However, they encountered other problems. As they rose, the hydrogen expanded causing the
neck of the balloon to open more. This resulted in the gas flowing down on their equipment and
turning the plates black and useless. In addition, the balloon took off and they landed in some
high bushes in Marshfield, Massachusetts, about thirty miles away from their beginning point. It
was obvious that the balloon possessed problems in being an aerial platform.

M. Arthur Batut took the first aerial photograph using a kite, over Labruguière,
France, in the late 1880s. The camera, attached directly to the kite, had an altimeter that encoded
the exposure altitude on the film allowing scaling of the image. A slow burning fuse, responding
to a rubber band-driven device, actuated the shutter within a few minutes after the kite was
launched. A small flag dropped once the shutter was released to indicate that it was time to bring
down the kite. Batut took his first aerial photograph in May 1888. However, due to the shutter
speed being too slow, the image was not very clear. After some modification to the thickness of
the rubber band a good shutter speed was obtained.


AERIAL PHOTOGRAPHY FROM 1900- 1914

In 1906, George R. Lawrence took oblique aerial pictures of San Francisco after the earthquake
and fires.

Using between nine and seventeen large kites to lift a huge camera (49 pounds) he took some of
the largest exposures (about 48 x 122 cm or 18 x 48 in.) ever obtained from an aerial platform.
His camera was designed so that the film plate curved in back and the lens fitted low on the front,
providing panorama images. The camera was lifted to a height of approximately 2,000 feet and an
electric wire controlled the shutter to produce a negative. Lawrence designed his own large-
format cameras and specialized in aerial views.

He used ladders or high towers to photograph from above. In 1901 he shot aerial photographs
from a cage attached to a balloon. One time, at more than 200 feet above Chicago, the cage tore
from the balloon, and Lawrence and his camera fell to the ground. Fortunately telephone and
telegraph wires broke his fall; he landed unharmed. He continued to use balloons until he
developed his method for taking aerial views with cameras suspended from unmanned kites, a
safer platform from his perspective. He developed a means of flying Conyne kites in trains and
keeping the camera steady under varying wind conditions. This system he named the 'Captive
Airship'.

In 1903, Julius Neubronner, a photography enthusiast, designed and patented a breast-mounted
aerial camera for carrier pigeons. Weighing only 70 grams, the camera took automatic exposures
at 30-second intervals along the flight line flown by a pigeon. Although faster than balloons they
were not always reliable in following their flight paths. The birds were introduced at the 1909
Dresden International Photographic Exhibition. Picture postcards of aerial photographs taken over
the exhibition were very popular. They were used at other fairs and for military surveillance.


DEVELOPMENT OF FASTER AND LIGHTER CAMERAS

In order for the pigeons to carry such small cameras and take several pictures in one flight, a new
type of film and a smaller camera system were needed. In the 1870s, George Eastman, born in the
rural community of Waterville in upstate New York, was an accountant in Rochester. After
working five years in a bank, he became bored with the monotony of the job. In 1878, he decided
to take a vacation to the island of Santo Domingo and re-evaluate his life. To record his trip he
acquired a wet-plate camera outfit. However, he found the camera and assorted darkroom
equipment to be cumbersome and bulky. He would need a small wagon to carry all of the
materials and equipment, an arrangement not suited for taking pictures on one's vacation. He soon
forgot about the trip to Santo Domingo and became intrigued with the idea of developing a better
film and camera system.

In 1879, Eastman discovered the formula for making a successful gelatin emulsion covered dry-
plate and built a machine for coating dry plates with the emulsion. These developments led to the
invention of rolled paper film. The resulting prints were sharp, clear and free from paper grain
distortion. In 1889, his company, Kodak, introduced flexible celluloid film and the popularity of
photography soared. He now needed a camera to take advantage of the new film. In 1900,
outfitted with a simple lens and the ability to handle rolled film, the one-dollar Kodak box
camera, called the Brownie, made Kodak and photography almost synonymous. Eastman had not
only revolutionized the field of photography but set the stage for new developments in the field of
aerial photography. His work was shortly followed in 1903 by the Wright Brothers' first
successful flight of a heavier-than-air aircraft. Another type of aerial platform was available.

AERIAL PHOTOGRAPHY IN WORLD WAR- I

At the beginning of World War I the military on both sides of the conflict saw the value of using
the airplane for reconnaissance work but did not fully appreciate the potential of aerial
photography. Initially, aerial observers, flying in two-seater airplanes with pilots, did aerial
reconnaissance by making sketch maps and verbally conveying conditions on the ground. They
reported on enemy positions, supplies, and movements; however, some observers tended to
exaggerate or misinterpret conditions. In some cases, their observations were based on looking at
the wrong army. From above, identifying one soldier from another was not easy. One time a
German observer indicated that an English unit was running around in great disarray and appeared
to be in a state of panic. The English were playing soccer.

Some English observers started using cameras to record enemy positions and found aerial
photography easier and more accurate than sketching and observing. The aerial observer became
the aerial photographer. Soon all of the nations involved in the conflict were using aerial
photography. The maps used by both sides in the Battle of Neuve-Chappelle in 1915 were
produced from aerial photographs. By the end of the war the Germans and the British were
recording the entire front at least twice a day. Both countries possessed up-to-date records of their
enemy's trench construction.

England estimated that its reconnaissance planes took one-half million photographs during the
war, and Germany calculated that if all of its aerial photographs were arranged side by side, they
would cover the country six times. The war brought major improvements in the quality of
cameras; photographs taken at 15,000 feet (4,572 m) could be blown up to show footprints in
the mud.

PHOTOGRAPHY FROM 1920- 1939

By World War I the airplane had matured in its development to be used for aerial reconnaissance.
However, aerial photographs taken from planes were often highly distorted due to shutter speeds
being too slow in relationship to the speed of the plane. Toward the end of the war Sherman M.
Fairchild developed a camera with the shutter located inside the lens. This design significantly
reduced the distortion problem. In addition, the camera’s magazine would prevent uneven
spacing. Fairchild also designed an intervalometer that allowed photos to be taken at any interval.
Combined, these developments made the Fairchild camera the best aerial camera system available.
With modifications, the Fairchild camera remained the desired aerial camera system for the next
fifty years.

In 1921, he took a series of 100 overlapping photographs and made an aerial map of Manhattan
Island.

This aerial map was his first real commercial success and it was used by several New York City
agencies and businesses. In 1922, Newark, New Jersey contracted with him to map its bay area.
A Connecticut town discovered 1,800 buildings not on its tax rolls using an aerial map, and
another town, East Haven, wanted to reassess its properties but discovered that to conduct a
ground survey would take five years and cost $80,000. The Canadian company, Laurentide Paper
and Pulp, hired him to survey the large, inaccessible forest regions of Canada. Within the first
year, 510 square miles were mapped. Fairchild was demonstrating that aerial photography had
many non-military uses and could be a successful venture commercially. By the mid-1930’s,
Fairchild Aerial Surveys was the largest and most commercially successful aerial photography
company in the United States.

Fairchild found it necessary to enter the field of manufacturing airplanes in order to have a good
solid aerial platform. The open-cockpit biplanes were totally unsatisfactory. He produced high-
wing cabin monoplanes. An enclosed, heated cabin protected the camera equipment as well as the
photographer and pilot from the weather elements. He now had three companies, one to produce
aerial cameras, another to conduct aerial surveys, and a final one to build planes suited to
undertake aerial photography. Fairchild’s brilliant camera designs and his strong commitment to
aerial photography brought aerial mapping to full maturity. Before his death in 1971, he saw his
cameras carried on Apollo 15, 16, and 17, and while astronauts explored the lunar surface, his
cameras mapped the moon.

In 1926, another platform was introduced for obtaining pictures of the Earth’s surface. In that
year Dr. Robert H. Goddard constructed and tested successfully the first rocket using liquid fuel.
The rocket was launched on March 16, 1926, at Auburn, Massachusetts. His second rocket was
also launched at Auburn in 1929 and it carried a scientific payload (a barometer and a camera).
The first picture from a rocket was taken during this launch.

Due to his lifetime of major accomplishments in the field of space technology, Goddard who died
in 1945 was honored in 1959 by receiving, posthumously, the Congressional Gold Medal. Also in
1959 in memory of his outstanding work, a major space science laboratory, NASA's Goddard
Space Flight Center, Greenbelt, Maryland, was established. Finally in 1959, Explorer VI, under
Goddard project management, provided the World with its first image of Earth from space. In
1960, the term “remote sensing” was coined.

In addition to Fairchild’s and Goddard’s accomplishments between World War I and
World War II, several other significant developments occurred within the field of remote
sensing during this period. These developments are outlined below:

• 1920's - First books on aerial photo interpretation were published.
• 1924 - Mannes and Godowsky patent their research on developing multi-layer color film.
• 1920's-30's - Interest in the peaceful uses of aerial photography increased during this
period.
• 1934 - Twelve people met in Washington, D.C. and from this meeting The American
Society of Photogrammetry was founded.
• 1935 - Launched from the Stratobowl near Rapid City, South Dakota, the balloon
Explorer II carried Captains Albert Stevens and Orvil Anderson, and an assortment of
instruments, to a world record altitude of 72,395 feet (22,066 meters).


AERIAL PHOTOGRAPHY IN WORLD WAR II

World War II brought about tremendous growth and recognition to the field of aerial photography
that continues to this day. In 1938, the chief of the German General Staff, General Werner von
Fritsch, stated, “The nation with the best photoreconnaissance will win the war.” By 1940,
Germany led the world in photoreconnaissance. However, after von Fritsch’s death the quality of
German photointelligence declined. When the United States entered the War in 1941, it basically
had no experience in military photointerpretation. By the end of the War, it had the best
photointerpretation capacity of any nation in the world. In 1945, Admiral J. F. Turner,
Commander of American Amphibious Forces in the Pacific, stated that, “Photographic
reconnaissance has been our main source of intelligence in the Pacific. Its importance cannot be
overemphasized.”

1950’S

During the 1950’s, aerial photography continued to evolve from work started during World War
II and the Korean War. Color-infrared became important in identifying different vegetation types
and detecting diseased and damaged vegetation. Multispectral imagery, that is images taken at the
same time but in different portions of the electromagnetic spectrum, was being tested for different
applications. Radar technology moved along two paralleling paths, side-looking air-borne radar
(SLAR) and synthetic aperture radar (SAR). Westinghouse and Texas Instruments did most of
this work for the United States Air Force.

1957

Russia launches Sputnik, the first satellite, marking the beginning of satellite imagery

1970s

The first of the Landsat satellites was launched by NASA in 1972. The Landsat program in the '70s
and '80s began selling satellite imagery commercially for the first time.


10 Types, Geometry and Scale of Aerial Photography


Aerial Photography:-

Aerial photography is the art of taking photographs of any feature or phenomenon on the earth's
surface from airborne platforms with the help of a camera, without coming into contact with that
particular object. Aerial photography, most commonly used by military personnel, may be divided
into two major types, the vertical and the oblique. Each type depends upon the attitude of the
camera with respect to the earth's surface when the photograph is taken.

Advantages of aerial photography:-

Aerial photographs have the advantage of providing us with synoptic views of large areas. This
characteristic also allows us to examine and interpret objects simultaneously on large areas and
determine their spatial relationships, which is not possible from the ground. Aerial photographs
are also cost effective in interpreting and managing natural resources. They have played a
significant role in map making and data analysis.

Classification of photographs:-

A number of systems have been used to classify photographs. The most common system is the
one that separates photographs into terrestrial and aerial (Figure 1).

Figure 1. Classification of Photographs (from Paine, 1981)

Vertical Aerial Photograph:-

A vertical photograph is taken with the camera pointed as straight down as possible (Figure 2).
The allowable tolerance is usually ±3° between the perpendicular (plumb) line and the camera
axis; ideally, the two coincide. A vertical photograph has the following characteristics:

1. The lens axis is perpendicular to the surface of the earth.


2. It covers a relatively small area.
3. The shape of the ground area covered on a single vertical photo closely approximates a
square or rectangle.
4. Being a view from above, it gives an unfamiliar view of the ground.
5. Distance and directions may approach the accuracy of maps if taken over flat terrain.


6. Relief is not readily apparent.

Figure 2. Relationship of the vertical aerial photograph with the ground.

Oblique Aerial Photograph:-

This type of aerial photograph is taken when the axis of the camera is tilted away from the
vertical, so that it makes an angle with the subject. Depending on the angle, it can be divided into
the following groups:

Low Oblique. This is a photograph taken with the camera inclined about 30° from the vertical
(Figure 3). It is used to study an area before an attack, to substitute for a reconnaissance, to
substitute for a map, or to supplement a map. A low oblique has the following characteristics:

1. It covers a relatively small area.


2. The ground area covered is a trapezoid, although the photo is square or rectangular.
3. The objects have a more familiar view, comparable to viewing from the top of a high
hill or tall building.
4. No scale is applicable to the entire photograph, and distance cannot be measured.
Parallel lines on the ground are not parallel on this photograph; therefore, direction
(azimuth) cannot be measured.
5. Relief is discernible but distorted.
6. It does not show the horizon

Figure 3. Relationship of low oblique photograph to the ground.


High Oblique. The high oblique is a photograph taken with the camera inclined about
60° from the vertical (Figure 4). It has limited military application; it is used primarily
in the making of aeronautical charts. However, it may be the only photography available.
A high oblique has the following characteristics:

1. It covers a very large area (not all usable).


2. The ground area covered is a trapezoid, but the photograph is square or
rectangular.
3. The view varies from the very familiar to unfamiliar, depending on the
height at which the photograph is taken.
4. Distances and directions are not measured on this photograph for the
same reasons that they are not measured on the low oblique.
5. Relief may be quite discernible but distorted as in any oblique view. The
relief is not apparent in a high altitude, high oblique.
6. The horizon is always visible.

Figure 4. Relationship of high oblique photograph to the ground.

Advantages of vertical over oblique aerial photographs:-

1. Vertical photographs present an approximately uniform scale throughout the photo, unlike
oblique photos. It follows that making measurements (e.g., distances and directions) on
vertical photographs is easier and more accurate (a simple scale computation is sketched
after this list).
2. Because of a constant scale throughout a vertical photograph, the determination of
directions (i.e., bearing or azimuth) can be performed in the same manner as a map. This
is not true for an oblique photo because of the distortions.
3. Because of a constant scale, vertical photographs are easier to interpret than oblique
photographs.
4. Vertical photographs are simple to use photogrammetrically as a minimum of
mathematical correction is required.
5. To some extent and under certain conditions (e.g., flat terrain), a vertical aerial
photograph may be used as a map if a coordinate grid system and legend information are
added.
6. Stereoscopic study is also more effective on vertical than on oblique photographs.

6.1.2. Advantages of oblique over vertical aerial photographs:-


1. An oblique photograph covers much more ground area than a vertical photo taken from
the same altitude and with the same focal length.
2. If an area is frequently covered by a cloud layer, the ceiling may be too low to take
vertical photographs, but there may be enough clearance for oblique coverage.
3. Oblique photos have a more natural view because we are accustomed to seeing the
ground features obliquely. For example, tall objects such as bridges, buildings, towers,
trees, etc. will be more recognizable because the silhouettes of these objects are visible.
4. Objects that are under trees or under other tall objects may not be visible on vertical
photos if they are viewed from above. Also some objects, such as ridges, cliffs, caves,
etc., may not show on a vertical photograph if they are directly beneath the camera.
5. Determination of feature elevations is more accurate using oblique photographs than
vertical aerial photographs.
6. Because oblique aerial photos are not used for photogrammetric and precision purposes,
they may use inexpensive cameras.

Depending on the platform on which the camera is mounted, aerial photography can be divided
into:-

• Balloon Aerial Photography: In this case the camera is mounted on a balloon. This is
the earliest form of aerial photography, first used in 1858.
• Kite Aerial Photography: In this case the camera is mounted on a kite.
• Mast Aerial Photography: The mast is used as the main object on which the camera is
mounted. It is fixed on a vehicle which takes the mast to the desired places on the
instruction of the photographer.

Depending on the type of camera used Aerial Photography can be divided into:-

• Multiple Lens Photography:- These are composite photographs taken with one camera
having two or more lenses, or by two or more cameras. The photographs are
combinations of two, four, or eight obliques around a vertical. The obliques are rectified
to permit assembly as verticals on a common plane.
• Convergent Photography:- These are done with a single twin-lens, wide-angle camera,
or with two single-lens, wide-angle cameras coupled rigidly in the same mount so that
each camera axis converges when intentionally tilted a prescribed amount (usually 15 or
20°) from the vertical.
• Panoramic:-The development and increasing use of panoramic photography in aerial
reconnaissance has resulted from the need to cover in greater detail more and more areas
of the world. A panoramic camera is a scanning type of camera that sweeps the terrain of
interest from side to side across the direction of flight. This permits the panoramic camera
to record a much wider area of ground than either frame or strip cameras.

Geometry of Aerial Photography:

Three terms need defining here, they are Principal Point, Nadir and Isocenter. They are defined
as follows:

1. Principal Point - The principal point is the point where the perpendicular projected through
the center of the lens intersects the photo image.

2. Nadir - The Nadir is the point vertically beneath the camera center at the time of exposure.


3. Isocenter - The point on the photo that falls on a line half- way between the principal point
and the Nadir point.

These points are important because certain types of displacement and distortion radiate from
them: tilt displacement radiates from the isocenter of the aerial photo, while topographic
(relief) displacement radiates from the nadir.

II. Perspective and Projection :

Aerial photographs are created using a central, or perspective, projection. Therefore, the relative
position and geometry of the objects depicted depend upon the location from which the photo
was taken. Because of this we get certain forms of distortion and displacement in air
photos.

III. Distortion and Displacement

Distortion - Shift in the location of an object that changes the perspective characteristics of the
photo.

Types of distortion include:

1. Film and Print Shrinkage;


2. Atmospheric refraction of light rays;
3. Image motion; and,
4. Lens distortion.

Displacement - a shift in the location of an object in a photo that does not change the perspective
characteristics of the photo (the fiducial distance between an object's image and its true plan
position, caused by a change in elevation).

Types of displacement include:

1. Curvature of the Earth;


2. Tilt; and,
3. Topographic or relief (including object height).

Both distortion and displacement cause changes in the apparent location of objects in photos. The
distinction between the types of effects caused lies in the nature of the changes in the photos.
These types of phenomena are most evident in terrain with high local relief or significant vertical
features.

Three main types of problems/effects caused by specific types of distortion and displacement are:

• Lens distortion - Small effects due to the flaws in the optical components (i.e. lens) of
camera systems leading to distortions (which are typically more serious at the edges of
photos).
• Tilt Displacement - This type of displacement typically occurs along the axis of the wings
or the flight line. Tilt displacement radiates from the isocenter of the photo and causes
objects to be displaced radially towards the isocenter on the upper side of the tilted photo
and radially outward on the lower side.

• Topographic Displacement - This is typically the most serious type of displacement. This
displacement radiates outward from Nadir. Topographic displacement is caused by the
perspective geometry of the camera and the terrain at varying elevations.

Overlap and Sidelap:-

To ensure complete coverage of the area, parts of the ground are photographed repeatedly.
Along a flight line, this repetition creates overlapping photographs, with an overlap of 60%-70%.
When two flight lines run side by side photographing the same area, the repetition between the
adjacent strips is called sidelap, and is 30%-40%.

Scale of Aerial Photograph:-

Scale is one of the most important pieces of information for the use of an aerial photograph or a map.
Quantitative measurements and the interpretation of features on a photograph are highly dependent
upon this information. Scale determines the relationship between the objects imaged on a
photograph and their corresponding dimensions in the real world (i.e., on the ground). The scale of a
photograph is defined as the ratio of the distance measured between any two points on the
photograph (or a map) to the distance between the same two points on the ground.

Figure 7.1. Relationship of photographic dimensions to their corresponding ground dimensions.

Representative fraction (RF), or ratio, is the fraction formed by a distance measured between two
points on a photograph over the distance measured between the same two points on the ground.
It can be expressed as 1/20000 or as 1:20000.

Unit equivalents, also called equivalent scale, express the equivalence of a distance measured
between two points in photographic units to the distance between the same two points in ground
units. For example, an RF of 1:20000 would be expressed as 1 mm = 20 m (or 1 cm = 200 m, or
1 inch = 1667 ft), meaning that a distance of 1 mm on a photograph is equivalent to 20 m on the
ground (or 1 cm is equivalent to 200 m on the ground).

Photo scale reciprocal (PSR) is simply the inverse of the representative fraction. For example, an
RF of 1:20000 would correspond to a PSR of 20000.

Types of Scale:-


Point scale: It is the scale at a point with a specific elevation on the ground. This implies that
every point on a vertical photograph at a different elevation will have a different scale. It is
given by

PSP = f / (H - hP) = f / HP

where:

PSP is the photo scale at point P,
f is the focal length of the camera used to take the photograph, i.e., the distance between the lens
and the focal plane,
H is the flying height of the aircraft above MSL, i.e., the distance between MSL and the lens,
hP is the elevation of point P above MSL, and
H - hP = HP is the flying height of the aircraft above point P.

Figure 7.4. Determination of point scale on a vertical aerial photograph.

Average scale: Unlike point scale, which is specific to a single point on the ground, average scale
may be determined for an entire project area, a set of photographs, a single photograph, a portion
of a photograph, or between two points on a photograph. It is given by

PSav = f / (H - hav) = f / Hav

where:

PSav is the average scale of the area considered (project, set of photographs, etc.),
hav is the average elevation of the area, and
H - hav = Hav is the flying height of the aircraft above the average elevation of the area.
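
As a worked illustration of these relations, the short Python sketch below (not part of the
original text; the camera and terrain values are illustrative assumptions) evaluates the point
scale PSP = f / (H - hP) and the average scale PSav = f / (H - hav):

# Minimal sketch of the point-scale and average-scale relations,
# assuming all lengths are expressed in the same unit (metres here).
def photo_scale(f, H, h):
    """Return the photo scale reciprocal (PSR) at elevation h,
    i.e. the N in a scale of 1:N, from PS = f / (H - h)."""
    return (H - h) / f

f = 0.152          # hypothetical focal length: 152 mm
H = 3200.0         # hypothetical flying height above MSL (m)

print("Point scale at h = 200 m: 1:%.0f" % photo_scale(f, H, 200.0))    # 1:19737
print("Average scale, hav = 160 m: 1:%.0f" % photo_scale(f, H, 160.0))  # 1:20000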


Figure 7.5. Average scale as compared to point scale.


Scale is affected by following factors:-

• Focal length
• Topography
• Tilt
• Flying height


11 AERIAL PHOTOGRAMMETRY
Aerial photogrammetry: Image parallax, Parallax measurement and Relief displacement:-

Image Parallax: The term parallax refers to the apparent change in the relative positions of
stationary objects caused by a change in viewing position. Simply put, it is the shift of an object
against a background caused by a change in observer position. If there is no parallax between two
objects, then they are side by side at exactly the same height.

Figure 1: Apparent motion of an object

This parallax is often thought of as the 'apparent motion' of an object against a distant background
because of a perspective shift, as seen in Figure 1. When viewed from Viewpoint A, the object
appears to be closer to the blue square. When the viewpoint is changed to Viewpoint B, the object
appears to have moved in front of the red square.

FIG. 2A. Projected stereoscopic image points with zero parallax. FIG. 2B. Projected
stereoscopic image points with parallax at the maximum in-screen value before divergence.


FIG. 2C. Projected stereoscopic image points with crossed, or off-screen, parallax. FIG. 2D.
Projected stereoscopic image points with divergent parallax.

Figure 2 shows images made up of points with various parallax values. The lines of sight of
the eyes correspond to the optical axes of their lenses, and their distance apart is called the
interpupillary separation.
In FIG. 2A the left and right image points are shown to correspond. By definition, this
condition is known as “zero parallax,” and such a point will appear at the plane of the screen. As
shown, the eyes are inwardly converged to fuse the superimposed corresponding (homologous)
left and right points.
With regard to FIG. 2B, note that the homologous points are separated by a distance given
by the arrowed line whose length is the same as the interpupillary separation. In other words, the
parallax value of these points is equal to the interpupillary separation. In such a case, the lines of
sight of the left and right eyes are parallel.
FIG. 2D is similar to FIG. 2B, except that the homologous points are separated by a
distance that is greater than the interpupillary separation. The lines of sight diverge, and this case
is known as divergence.

Parallax Measurement:-

Precise parallax measurements of distance usually have an associated error; thus a
parallax may be described as some angle ± some angle-error. However, this "± angle-error" will
not translate directly into a ± error for the range, except for relatively small errors, because an
error toward a smaller angle results in a greater error in distance than an error toward a larger
angle.

However, an approximation of the distance error can be computed by means of the
following (taking the distance to be the reciprocal of the parallax, d = 1/p):

δd ≈ δp / p²

where d is the distance, p is the parallax, and δp is the parallax error. The approximation is far
more accurate for relatively small values of the parallax error when compared to the parallax.
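
A minimal numeric sketch of this behaviour, assuming the simple reciprocal relation d = 1/p so
that the distance error is approximately δp/p² (the values below are illustrative only):

# Sketch: the same parallax error produces a larger distance error
# when the parallax itself is smaller (i.e. the object is farther away).
def distance_and_error(p, dp):
    d = 1.0 / p              # distance from parallax, d = 1/p
    return d, dp / p**2      # approximate distance error

print(distance_and_error(0.10, 0.01))   # (10.0, 1.0)
print(distance_and_error(0.05, 0.01))   # (20.0, 4.0): twice the distance, four times the error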


Relief Displacement:-

Because an aerial photograph is a central projection, all elevations and depressions have
their images displaced from their true ground positions, except for objects at the
nadir point (n), or the principal point (pp) in vertical aerial photographs. Relief displacement is
the distance between the position a point would occupy on the photograph if it lay on the
reference plane and its actual position due to relief.
The relief displacement Δr is proportional to the distance from the nadir point, with the ratio of
the height difference Δz to the flying height Zm. In a tilted photograph, relief displacement is
radial from the nadir point.

Figure: relief (topographic) displacement of an object relative to its planimetrically corrected
position.


Map (orthographic projection): constant scale; no relief displacement.
Photo (perspective projection): varied scale; relief displacement, hence differences in the size,
shape and location of the trees.
Relief Displacement Relationships

h/H = d/r = D/R

d = h x r / H
h = d x H / r

where h is the object height, H the flying height above the base of the object, r the radial
distance on the photo from nadir to the displaced image point, and d the relief displacement.

• No relief displacement at nadir: if r = 0, so is d.
• Displacement varies directly with height: a 1000 ft mountain will be displaced twice as far as
a 500 ft mountain.
• Displacement varies directly with radial distance from nadir to the object: an elevation 4”
from nadir has twice as much displacement as the same elevation 2” from nadir.
• Objects above the elevation of nadir are displaced away from nadir.
• Objects below nadir elevation are displaced toward nadir.
• Relief displacement is radial from nadir.
• Displacement varies inversely with flying height: there is little topographic displacement on
photographs taken from high altitude (e.g., by satellite).
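
These relations are easy to verify numerically. The short Python sketch below (illustrative
values only, not from the original text) applies d = h x r / H and its inversion h = d x H / r:

# Minimal sketch of the relief displacement relations; all lengths in metres.
def displacement(h, r, H):
    """Displacement d on the photo of an object of height h whose image
    lies a radial distance r from nadir, for flying height H."""
    return h * r / H

def height_from_displacement(d, r, H):
    """Invert d = h*r/H to estimate object height from measured displacement."""
    return d * H / r

H = 3000.0
print(displacement(500.0, 0.08, H))                 # 0.0133 m on the photo
print(displacement(1000.0, 0.08, H))                # doubles with height: 0.0267 m
print(displacement(500.0, 0.16, H))                 # doubles with radial distance: 0.0267 m
print(height_from_displacement(0.01333, 0.08, H))   # recovers ~500 m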


12 Equipment Used for Aerial Photo Interpretation


Aerial photo interpretation:-

The general purposes of aerial photo interpretation are viewing photographs, making
measurements on photographs, and transferring interpreted information to base maps or digital
databases. Aerial photo interpretation involves stereoscopic viewing to provide a three-dimensional
view of the terrain, which is possible because of the binocular vision of the human eyes.

Equipment used for viewing aerial photos:-

Sophisticated automatic plotting instruments, such as the A-7 and A-8 stereoplotters, can be used
for the interpretation and preparation of maps. However, such equipment is beyond the reach of
individuals or general laboratories, and the techniques involved are difficult. The generally used
equipment are therefore stereoscopes and sketch masters, since they are less expensive and
simple.

Stereoscopes facilitate the stereo viewing process. People having weak eyesight in one eye may not
have the ability to see in stereo; even so, people with monocular vision can become proficient photo
interpreters. Several types of stereoscopes are available, such as lens stereoscopes, mirror
stereoscopes and zoom stereoscopes.

Lens stereoscope

A lens stereoscope consists of two lenses mounted in the same plane, generally attached to a
rectangular metallic frame. It is portable and comparatively inexpensive. The design assumes
that the distance between a person's two eyes is approximately 65 mm. With the help of its two
legs it can be placed on the table, so that the lenses sit about 100 mm above the plane of the
table. The photographs are magnified about two and a half times.

Below the instrument, the two photographs of a stereo pair are placed, and the distance between
them is adjusted so that the two images of the same point fuse into one. An imaginary
three-dimensional model of the visible landscape then appears, and we can visualize the real form
of that part of the land on a small scale.


The figures given below can be used to test stereoscopic vision. When this diagram is viewed
through a stereoscope, the rings and other objects should appear to be at varying distances from
the observer.

The principal disadvantage of small lens stereoscopes is that the photographs must be quite close
together, so the interpreter cannot view the entire stereoscopic area of 240 mm aerial photographs
without raising the edge of one of the photographs.

Mirror stereoscopes use a combination of prisms and mirrors to separate the lines of sight of the
viewer’s two eyes, with little or no magnification. The interpreter can therefore view all or most
of the stereoscopic portion of a 240 mm stereopair without moving either the photographs or the
stereoscope. The principal disadvantages of the mirror stereoscope are its size, which makes it
not portable, and its cost, which is higher than that of a simple lens stereoscope.

Mirror stereoscopes

Scanning mirror stereoscopes are an improved form of mirror stereoscope, with two binoculars
attached, usable at 1.5x or 4.5x magnification. They have a built-in provision for moving the
field of view across the entire stereo overlap area of the photographs without moving the
photographs or the stereoscope, and they allow two persons to view the same aerial photographs
simultaneously.


Scanning mirror stereoscope

Zoom stereoscopes have a continuously variable magnification of 2.5 to 10 power. They are
expensive precision instruments, typically with very high resolution. The image in each
eyepiece can be optically rotated through 360° to accommodate uncut rolls of film taken under
varying aircraft flight conditions.

Zoom stereoscope

Either paper prints or film transparencies can be viewed using a stereoscope. Paper prints are
more convenient to handle, more easily annotated, and better suited to field use; an interpreter
would generally use a simple lens or mirror stereoscope with paper prints.

A more elaborate viewer such as a zoom stereoscope can be used with colour and colour-infrared
film transparencies. Transparencies are placed on a light table for viewing, since the light source
must come from behind the transparency.


Light table and zoom stereoscope

Equipment used for measuring aerial photos:-

The task of taking distance measurements from aerial photographs can be performed using many
measurement devices. They differ in their cost, accuracy and availability.

A parallax bar, or stereomicrometer, is used together with a mirror stereoscope. It is a metallic
micrometer scale built on a bar with a graduated scale. A graduated screw is fixed at the end, and
its rotation is related to the graduated scale on the bar in a fixed ratio. A transparent glass plate
carrying a floating mark on its underside is fixed at each end. If the difference in height between
two points is to be determined, the first step is to fuse the two images of the stereo pair into one
by properly setting the instrument, so that the three-dimensional image becomes visible.

For measuring distances, an engineer’s scale or metric scale is often adequate. In addition to
distances, areas are often measured on a photograph. Accurate area measurements can be made
from maps generated from airphotos in stereo plotters or orthophotoscopes.

A polar planimeter mechanically computes area as the interpreter traces around the boundary of
the area in a clockwise direction. Areas can be determined most rapidly and accurately using an
electronic coordinate digitizer, or with a digitizing tablet interfaced with a microcomputer.

Equipment used for transferring interpreted information:-

After interpretation, the data can be transferred to a base map. When the base map and the
photograph are not at the same scale, special optical devices can be used for the transfer process:
by adjusting the magnification of the two views, the photo can be matched to the scale of the map.

The Bausch and Lomb Zoom Transfer Scope allows the operator to view both a map and a pair of
stereo photographs, and can accommodate a wide disparity of photo and map scales. The colour
additive viewer is another piece of photo interpretation equipment: it colour-codes and superimposes
three multispectral photographs to generate a more interpretable colour composite. Most colour
additive viewers are monoscopic; a few are equipped for stereoscopic viewing.

The sketch master is used for mapping and for delineating landscape features on available
topographic maps. It contains a metallic stand with a graduated scale fixed to a geometrically
shaped metallic piece. A metal piece is attached to the stand by an adjustable screw and carries a
horizontally fixed arm, which can be moved forward and backward.

On the other side another metal piece is attached, on which the photographs can be fixed with
the help of magnetic metallic weights; this, too, is adjustable. At the end of the horizontal arm
another horizontal bar is attached, carrying a double prism. The viewer can thus see the images
of the air photo and of the map or sketch placed below the prisms, and it becomes possible to
construct the map with the help of the adjustment of control points.


13 Digital image processing, sources of error, radiometric and geometric corrections
Introduction
Remote sensing data can be analyzed using visual image interpretation techniques if
the data are in hardcopy or pictorial form. Visual image interpretation techniques have
certain disadvantages: they may require extensive training and are labour intensive. If the data are
in digital mode, the remote sensing data can be analyzed using digital image processing
techniques, and such a database can be used in a raster GIS.

Basic character of digital image

A digital image is actually composed of a two-dimensional array of discrete picture elements, or
pixels.

Figure: (a) original 200 x 200 digital image; (b) enlargement showing 20 x 20 pixels; (c) 10 x 10
enlargement; (d) digital numbers corresponding to the radiance of each pixel.

In numerical format, the image data can be readily analyzed with the aid of a computer.
A digital image is defined as a matrix of digital numbers (DNs), each the output of a process
of analog-to-digital conversion. The surface of the ground is divided into a number of parcels;
each parcel of land is represented as a pixel on the image, and each pixel is occupied by a digital
number, called the pixel value. This pixel value, or digital number, reflects the radiometric
resolution of the remote sensing data. Visual and numerical techniques are complementary in
nature, and consideration must be given to the approach (or combination of approaches) that best
fits a particular application.

Satellite remote sensing data in general, and digital data in particular, have been used as basic
inputs for the inventory and mapping of natural resources of the earth's surface, such as agriculture,
soils, forestry, and geology. The central idea behind digital image processing is that the digital
image is fed into a computer and processed one pixel at a time, each input value being converted
(for example through Look-Up-Table, LUT, values) into a value for a new image. Virtually all the
procedures may be grouped into one or more of the following broad types of operations –
(1) Pre- Processing
(2) Image Registration
(3) Image enhancement
(4) Image filtering
(5) Image transforms
(6) Image classification

Preprocessing
Pre-processing involves correcting the raw data geometrically, calibrating the data radiometrically,
and eliminating the noise present in the data. All pre-processing methods are considered under
three heads, namely,
(1) Geometric correction methods,
(2) Radiometric correction methods,
(3) Atmospheric correction methods.

Geometric correction methods


The transformation of a remotely sensed image into a map with scale and projection
properties is called geometric correction. It can be used in one of the following
circumstances –
• To transform an image to match a map projection,


• To locate points of interest on map and image,
• To bring adjacent images into registration,
• To overlay images and maps within GIS, and
• To integrate remote sensing data with GIS.

To correct sensor data, both internal and external errors must be determined and be either
predictable or measurable. Internal errors, due to sensor effects, are systematic or
stationary, i.e., constant for all practical purposes.

Radiometric correction methods


The primary function of remote sensing data quality evaluation is to monitor the
performance of the sensors, which is done continuously by applying radiometric correction
models to digital image data sets. The radiance measured by any given system over a given
object is influenced by factors such as changes in scene illumination, atmospheric conditions,
viewing geometry and instrument response characteristics. One of the most important
radiometric data processing activities involved in many quantitative applications of digital image
data is the conversion of digital numbers to absolute physical values, namely radiance and
reflectance.

Computation of radiance (L)

Radiance is a measure of the radiant energy given out by an object and picked up by the remote
sensor. Spectral radiance is defined as the energy within a wavelength band radiated by a unit
area per unit solid angle of measurement.

Radiance (L) = (Dn / Dmax) (Lmax - Lmin) + Lmin

where
Dn = digital value of a pixel from the Computer-Compatible Tape (CCT),
Dmax = maximum digital number recorded on the CCT,
Lmax = maximum radiance, measured at detector saturation, in mW cm-2 sr-1,
Lmin = minimum detectable radiance, in mW cm-2 sr-1.

Computation of reflectance
Reflectance is an energy ratio, a function of radiance, defined by the following formula:

Reflectance = Radiance / (E sin X)

where
E = irradiance in mW cm-2 at the top of the atmosphere, and
X = solar elevation angle, available in the header file of the CCT.
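
The two conversions above can be sketched in a few lines of Python; the calibration constants
(Lmin, Lmax, E) and the solar elevation below are illustrative assumptions, not real header
values:

import math

# DN -> radiance, using the linear calibration given above.
def dn_to_radiance(dn, d_max, l_min, l_max):
    return (dn / d_max) * (l_max - l_min) + l_min

# Radiance -> reflectance, using Reflectance = Radiance / (E sin X).
def radiance_to_reflectance(L, E, sun_elev_deg):
    return L / (E * math.sin(math.radians(sun_elev_deg)))

L = dn_to_radiance(dn=120, d_max=255, l_min=0.04, l_max=2.40)
print(L)                                        # ~1.15 mW cm-2 sr-1
print(radiance_to_reflectance(L, E=185.0, sun_elev_deg=45.0))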

Cosmetic operations
Two kinds of cosmetic operation are common. The first is the correction of digital images
containing partially or entirely missing scan lines, a phenomenon called line drop. The second is
destriping of the imagery, needed because different detectors sometimes record different
irradiance values for the same object.

Random noise removal


Image noise is any unwanted disturbance in image data that is due to limitations in the
sensing and data recording process. The random noise problems in digital data are
characterized by nonsystematic variations in gray level from pixel to pixel, called bit errors.
Such noise is often referred to as being “spiky” in character, and it causes images to have a
“salt and pepper” or snowy appearance.

Atmospheric correction methods


The effect of scattering is inversely proportional to the fourth power of the wavelength of the
energy; that is, scattering is greater at shorter wavelengths (visible) than at longer wavelengths
(infrared). Scattering increases the signal value, and in reality, because of the presence of haze,
fog, or atmospheric scattering, there always exists some unwanted signal component called bias.


Image registration
Image registration is the translation and rotation alignment process by which two images or
maps of like geometry, covering the same objects, are positioned coincident with respect to one
another, so that corresponding elements of the same ground area appear in the same place on the
registered images. Rectification is the process by which the geometry of an image area is made
planimetric. Whenever accurate area, direction, and distance measurements are required,
geometric rectification is required.
Image enhancement
Slight differences in radiance between features are often hard for the eye to perceive; the aim of
digital enhancement is to amplify these slight differences for better clarity of the image scene.
In other words, digital enhancement increases the separability (contrast) between the classes or
features of interest. Digital image enhancement may be defined as the set of mathematical
operations applied to digital remote sensing input data to improve the visual appearance of an
image for better interpretability or subsequent digital analysis. The common problems that can be
removed by image enhancement are –
(1) Low sensitivity of detectors,
(2) Weak signal of objects present on earth surface,
(3) Similar reflection of different objects,
(4) Environment condition at the time of recording, and
(5) Human eye is poor at discriminating slight radiometric &spectral differences.

Image filtering
A characteristic of remotely sensed images is a parameter called spatial frequency, defined
as the number of changes in brightness value per unit distance in any particular part of an image.
If the brightness values change dramatically over very short distances, the area is called a
high-frequency area. Algorithms which perform this kind of enhancement are called “filters”
because they suppress certain frequencies and pass (emphasize) others. Filters that pass high
frequencies, emphasizing fine detail and edges, are called high-frequency filters, and filters that
pass low frequencies are called low-frequency filters.

Image transformation
All the transformations in image processing of remotely sensed data allow the
generation of a new image, based on arithmetic operations, mathematical statistics and Fourier
transformations.


14 Geometric correction methods, radiometric correction methods, random noise removal

Introduction
Remotely sensed raw data, received from imaging sensors mounted on satellite platforms,
generally contain flaws and deficiencies. The correction of deficiencies and removal of flaws
present in the data through appropriate methods are termed pre-processing methods. This
involves the initial processing of the raw image data to correct geometric distortions, to
calibrate the data radiometrically, and to eliminate the noise present in the data.

1.Geometric Correction Methods


Geometric distortion
Remotely sensed images are not maps, yet frequently information extracted from remotely sensed
images is integrated with map data in a geographical information system.
Geometric distortion is an error on an image, between the actual image coordinates and the ideal
image coordinates which would be projected theoretically with an ideal sensor and under ideal
conditions. Geometric distortions are classified into

• Internal distortion
• External distortions

Internal distortions result from the geometry of the sensor.

External distortions result from the attitude of the sensor or the shape of the object.

Geometric correction is undertaken to remove geometric distortions from a distorted image,
and is achieved by establishing the relationship between the image coordinate system and the
geographic coordinate system using the calibration data of the sensor, measured data of position
and attitude, ground control points, atmospheric conditions, etc. The transformation of a remotely
sensed image into a map with scale and projection properties is called geometric correction.
Geometric correction of remotely sensed images is required when the image, or a product
derived from the image such as a vegetation index or a classified image, is to be used in one of the
following circumstances:
• to transform an image to match a map projection
• to locate points of interest on map and image
• to bring adjacent images into registration
• to overlay temporal sequences of images of the same area perhaps acquired by different
sensors
• to overlay images and maps within GIS
• to integrate remote sensing data with GIS.

To correct sensor data, both internal and external errors must be determined and be either
predictable or measurable. Internal errors are due to sensor effects and are systematic or
stationary, i.e., constant for all practical purposes. External errors are due to platform
perturbations and scene characteristics, which are variable in nature and can be determined from
ground control and tracking data.

Sources and effects of geometric errors of an image

S.No.  Effect                     Source of error
1      Platform effect            altitude, attitude, scan skew, mirror scan velocity
2      Scene effect               earth rotation, map projection
3      Sensor effect              mirror sweep
4      Scene and sensor effect    panorama, perspective

Geometric correction
The steps to follow for geometric correction are as follows

1) Selection of method
After consideration of the characteristics of the geometric distortion as well as the available
reference data, a proper method should be selected.

2) Determination of parameters
Unknown parameters which define the mathematical relationship between the image coordinate
system and the geographic coordinate system should be determined with calibration data and/or
ground control points.

3) Accuracy check
Accuracy of the geometric correction should be checked and verified. If the accuracy does not
meet the criteria, the method or the data used should be checked and corrected in order to avoid
the errors.

4) Interpolation and resampling


Geo-coded images should be produced by the techniques of resampling and interpolation. There
are three methods of geometric correction, as mentioned below.

a. Systematic correction
When the geometric reference data or the geometry of the sensor are given or measured, the
geometric distortion can be corrected theoretically and systematically. For example, the geometry
of a lens camera is given by the collinearity equation with calibrated focal length, parameters of
lens distortion, coordinates of fiducial marks, etc. The tangent correction for an optical
mechanical scanner is a type of systematic correction. Generally, systematic correction is
sufficient to remove all such predictable errors.
b. Non-systematic correction
Polynomials to transform from a geographic coordinate system to an image coordinate system, or
vice versa, are determined from the given coordinates of ground control points using the least
squares method. The accuracy depends on the order of the polynomials and on the number and
distribution of ground control points.
c. Combined method
Firstly the systematic correction is applied; then the residual errors are reduced using lower-order
polynomials. Usually the goal of geometric correction is to obtain an error within plus or
minus one pixel of the true position.
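
As a sketch of the non-systematic (polynomial) approach, the fragment below fits a first-order
polynomial (affine) transform from map coordinates to image coordinates over four made-up
ground control points, using numpy's least-squares solver; the GCP values are entirely
hypothetical:

import numpy as np

# Ground control points: (map x, map y) -> (image column, image row).
map_xy = np.array([[500100., 4100200.], [500900., 4100250.],
                   [500150., 4100900.], [500950., 4100950.]])
img_cr = np.array([[10., 980.], [790., 940.], [30., 260.], [810., 220.]])

# First-order polynomial: col = a0 + a1*x + a2*y (and likewise for row).
A = np.c_[np.ones(len(map_xy)), map_xy]
coef, _, _, _ = np.linalg.lstsq(A, img_cr, rcond=None)

# Residuals at the GCPs show whether the fit meets the +/- 1 pixel goal.
residuals = A @ coef - img_cr
print(np.abs(residuals).max())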

2 .Radiometric Correction Methods


Radiometric correction is carried out to remove radiometric errors or distortions. When the
emitted or reflected electromagnetic energy is observed by a sensor on board an aircraft or
spacecraft, the observed energy does not coincide with the energy emitted or reflected from the
same object observed from a short distance. This is because the sun's azimuth and elevation,
atmospheric conditions such as fog or aerosols, the sensor's response, etc. influence the observed
energy. Therefore, in order to obtain the real irradiance or reflectance, these radiometric
distortions must be corrected. Radiometric correction is classified into the following three types:

(1) Radiometric correction of effects due to sensor sensitivity

In the case of optical sensors using a lens, a fringe area in the corners will be darker than the
central area. This is called vignetting. Vignetting can be expressed by cos^n(θ), where θ is the
angle of a ray with respect to the optical axis; n depends on the lens characteristics, though it is
usually taken as 4. In the case of electro-optical sensors, measured calibration data relating
irradiance to the sensor output signal can be used for radiometric correction.


(2) Radiometric correction for sun angle and topography

a. Sun spot
The solar radiation is reflected diffusely from the ground surface, which results in lighter areas
in an image, called a sun spot. The sun spot, together with vignetting effects, can be corrected by
estimating a shading curve, determined by Fourier analysis to extract the low-frequency
component.

b. Shading
The shading effect due to topographic relief can be corrected using the angle between the
direction of the solar radiation and the normal vector to the ground surface.

(3) Atmospheric correction


Various atmospheric effects cause absorption and scattering of the solar radiation. Reflected or
emitted radiation from an object, and the path radiance (atmospheric scattering), should
therefore be corrected for.

3.Random Noise Removal

Image noise is any unwanted disturbance in image data that is due to limitations in the
sensing and data recording process. The random noise problems in digital data are characterized
by nonsystematic variations in gray level from pixel to pixel, called bit errors. Such noise is
often referred to as being 'spiky' in character, and it causes images to have a 'salt and pepper' or
snowy appearance. Bit errors are handled by recognizing that noise values normally change much
more abruptly than true image values. Thus, noise can be identified by comparing each pixel in an
image with its neighbours: if the difference between a given pixel value and its surrounding
values exceeds an analyst-specified threshold, the pixel is assumed to contain noise. The noisy
pixel value can then be replaced by the average of its neighbouring values. Moving windows of
3 x 3 or 5 x 5 pixels are typically used in such procedures. The moving window concept basically
involves (a) projection of a 3 x 3 pixel window onto the image being processed, and (b) movement
of the window from line to line.
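
A compact sketch of this moving-window procedure, using numpy and scipy (the threshold of
40 grey levels is an arbitrary, analyst-style choice, not a value from the text):

import numpy as np
from scipy.ndimage import uniform_filter

def despike(image, threshold=40.0):
    """Replace pixels that differ from the mean of their 8 neighbours
    by more than `threshold` with that neighbourhood mean."""
    img = image.astype(float)
    # mean of the 3x3 window excluding the centre pixel
    neigh_mean = (uniform_filter(img, size=3) * 9.0 - img) / 8.0
    noisy = np.abs(img - neigh_mean) > threshold
    out = img.copy()
    out[noisy] = neigh_mean[noisy]
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0            # a one-pixel "spike" (bit error)
print(despike(img)[2, 2])    # 100.0: the spike is replaced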


15 Image Enhancement Techniques



The radiance (reflected or emitted) of features on the ground, when converted into a digital
image, gets degraded due to the low sensitivity of the detectors, weak signals from the objects on
the earth's surface, similar reflectance of different objects, and environmental conditions at the
time of recording. This creates a low-contrast image whose features cannot be easily
characterized by the human eye.
Image enhancement techniques are used to manipulate the visual appearance of a digital image
for better interpretation by improving the information content of the image in the following ways.

• Contrast Enhancement
• Intensity, Hue and saturation transformations
• Density Slicing
• Edge Enhancement
• Making Digital Mosaics
• Producing synthetic stereo images

The enhancement techniques depend mainly upon two factors –

• The digital data (i.e. with spectral bands and resolution)


• The objectives of interpretation

Digital image enhancement can be done by improving the remote sensing input data of the image
using various mathematical operators. These techniques can be broadly classified into two

• Point operators
• Local operators

Point operations modify the values of each pixel in an image data set independently, whereas
local operations modify the values of each pixel in the context of the pixel values surrounding it.
Contrast enhancement is an example of a point operation, and spatial filtering of a local operation.

Contrast Enhancement

A remote sensing system, i.e., sensors mounted on board aircraft and satellites, should be capable
of imaging a wide range of scenes, from very low radiance (oceans, low solar elevation angles,
high altitudes) to very high radiance (snow, ice, sand, low altitudes). For any particular area that is
imaged, the sensor’s range must be set to accommodate a large range of scene radiance and have
as many bits/pixel as possible over this range for precise measurements. However, any single
scene rarely uses up the full range, which is typically eight bits/pixel or more. When such a scene
is imaged, converted to DNs (Digital Numbers) and displayed on a black-and-white monitor that
uses eight bits/pixel in each color, it will appear dull and lacking in contrast because it does not
use the full range available in the display.

For example, in Fig. 2, the histogram of image A shows the number of pixels that respond to each
DN. The central 92% of the histogram has a range of DNs from 49 to 106, which utilizes only
23% of the available brightness range. This limited range of brightness values accounts for the
low contrast ratio of the original image.


The aim of contrast enhancement is to expand the range of the original DN data to fill the
available display GL (Grey Level) range and thereby enhance the contrast of the digital image.
This transformation is called a contrast stretch.

Linear Enhancement

Linear contrast stretch is one of the simplest enhancement techniques used to improve the
contrast of an image. It expands the image DN range to the full range of the display device
(0-255, the range of values that can be represented in an 8-bit display device). This procedure is
also called a min-max stretch (Graph 1).

A DN (Digital Number) value at the low end of the original histogram is assigned to extreme
black, and a value at the high end is assigned to extreme white. In this example (Fig. 2), the lower
4% of pixels (DN < 49) are assigned to black (DN = 0), and the upper 4% (DN > 106) are assigned
to white (DN = 255). The map of Fig. 1 shows the different features for comparison. The
intermediate values are interpolated between 0 and 255 following a linear relationship, as
given below:

Y = a + bx

where X and Y are the input and output gray values of the same pixel, and “a” and “b” are the
intercept and slope respectively.

Graph 1: Minimum-maximum linear contrast enhancement.
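
A minimal numpy sketch of a min-max linear stretch with the kind of percentile saturation used
in this example (the 4% clipping level is taken from the text; the implementation itself is an
illustrative assumption, not the source's own code):

import numpy as np

def linear_stretch(dn, low_pct=4.0, high_pct=96.0):
    """Map the DN range between two percentiles onto 0-255,
    saturating pixels outside that range to black or white."""
    lo, hi = np.percentile(dn, [low_pct, high_pct])
    out = (dn.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)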


There is a loss of contrast at extreme high and low tail of the histogram for pixels with a DN
range smaller than the min-max range. In the northeast portion of the original image (Fig.2A), the
lower limits of snow caps on volcanoes are clearly defined. On the stretched image (Fig.2B), the
white tone includes both the snow and the alluvium lower on the flanks of the mountain. In the
small dry lake north of the border, patterns that are visible on the original image are absent on the
stretched image. Brightness differences within the dry lake and between the snow and alluvium
were in the range of DNs greater than 106. On the stretched image, all these DNs are now white,
as shown on the histogram (Fig.2B) by the spike at a DN of 255.

To increase the contrast, a saturation stretch may be implemented together with the linear stretch:
pixels with a DN outside the chosen range are transformed to a GL of either 0 or 255. Typically,
saturation (clipping) of 1%-2% of the image's pixels is a safe level at which there is no loss of
image structure due to saturation.

A linear transformation can also be used to decrease image contrast if the image DN range exceeds
that of the display. This situation occurs for radar imagery, some multispectral imagery such as
AVHRR (10 bits/pixel), and most hyperspectral sensors (12 bits/pixel).


Fig. 1: Location map for the Landsat image of an area in northern Chile and Bolivia.

A. Original Image with no contrast enhancement


B. Linear contrast Stretch with lower and upper four percent of pixels saturated to black
and white respectively.

C. Gaussian Stretch

Figure 2: Portion of Landsat MSS band-4 image of an area in the Northern Andes, Chile
and Bolivia.

Non-Linear Enhancement

Non-linear contrast stretch is used when the image histogram is asymmetric and the DN values
cannot be handled by a simple linear transformation. This method expands one portion of the
grey scale while compressing the other portion (Graph 2). While spatial information is preserved,
quantitative radiometric information can be lost. Examples of non-linear stretches include the
logarithmic stretch, exponential stretch, histogram equalization, etc.

A - Original Histogram
B- Nonlinear Enhancement
X - Brightness levels
Y - Image area (pixels)

Graph 2: Non – Linear Enhancement

Non-Linear Logarithmic Enhancement

Non-linear logarithmic contrast enhancement is used to emphasize detail in the darker regions of
the image by compressing the brightness values within an image. Here the output pixel grey
values (Yij) are generated from the input pixel grey values (Xij) following the logarithmic
expression:

Yij = a log (Xij) + b

where “a” and “b” are determined from the maximum and minimum grey values of the input
image and the corresponding maximum and minimum values required in the output image. The
characteristics of logarithmic enhancement are:
a) It makes low contrast more visible by enhancing low-contrast edges.
b) It provides a more constant signal-to-noise ratio.
c) It provides a more equal distribution of grey values.
d) It transforms multiplicative noise into additive noise.
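
A short sketch of this mapping, with “a” and “b” derived from the input range so that the output
fills 0-255 (an illustrative implementation, assuming an 8-bit input band):

import numpy as np

def log_stretch(dn):
    x = dn.astype(float) + 1.0            # +1 avoids log(0)
    lo, hi = np.log(x.min()), np.log(x.max())
    a = 255.0 / (hi - lo)                 # slope fitted to the input range
    b = -a * lo                           # offset so the minimum maps to 0
    return (a * np.log(x) + b).astype(np.uint8)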

Exponential Contrast Enhancement

Exponential contrast enhancement acts on the edges in an image to compress low-contrast
edges while expanding high-contrast edges. It highlights features having higher grey values,
thereby enhancing the bright areas in an image. This technique produces less visible detail
than the original image and is of limited use for general image enhancement. The grey values
(Xij) in the input image are transformed to grey values (Yij) in the output image as follows:

Yij = a e^(b Xij) + c

where a, b and c are constants; b is arbitrarily chosen (typically between 0.01 and 0.1) to control
the weight given by the exponential to higher values, and "a" and "c" scale the dynamic range of
the grey values of the output image between 0 and 255.
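
A companion sketch for the exponential mapping, with "a" and "c" again fitted so the output fills
0-255 and a small illustrative value of b:

import numpy as np

def exp_stretch(dn, b=0.02):
    e = np.exp(b * dn.astype(float))      # expands high-DN differences
    a = 255.0 / (e.max() - e.min())
    c = -a * e.min()
    return (a * e + c).astype(np.uint8)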

Gaussian Stretch

A Gaussian stretch is used to enhance contrast within the tails of the histogram. The method is
so called because it involves fitting the observed histogram to a normal, or Gaussian, histogram.
A Gaussian (normal) distribution is defined by

f(x) = C e^(-a x²), with C = (a/π)^0.5

The maximum value of f(x) is C, and at one standard deviation from the mean, σ = 1/(2a)^0.5,
f(x) drops to e^(-0.5), or 0.607, of that maximum. In this method of enhancement, each pixel
value of the input image is converted to a LUT (Look-Up Table) value based on the probability
of each pixel value with respect to a class following the Gaussian law. The normal distribution
curve is shown in Graph 3 below. As in other contrast enhancements based on histogram analysis
of the input image values, the range of levels allocated to the output image exceeds the range of
levels of the pixel values in the input image. This results in an overall brightening of the
displayed image.

Graph 3: Normal Distribution
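
One way to sketch a Gaussian stretch is as histogram matching: each pixel is replaced by the
normal-quantile value at its cumulative rank, then rescaled to 0-255. This is an illustrative
reading of the method, not the source's own algorithm:

import numpy as np
from scipy.stats import norm

def gaussian_stretch(dn):
    flat = dn.ravel().astype(float)
    ranks = flat.argsort().argsort() + 1.0      # ranks 1..N
    cdf = ranks / (flat.size + 1.0)             # empirical CDF in (0, 1)
    z = norm.ppf(cdf)                           # matching Gaussian quantiles
    out = (z - z.min()) * 255.0 / (z.max() - z.min())
    return out.reshape(dn.shape).astype(np.uint8)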

In the example (Fig. 2C), the different lava flows are distinguished, and some details within the
dry lake are emphasized. In this method the enhancement occurs at the expense of contrast in the
middle grey range; the fracture pattern and some of the folds are suppressed in this image.

Density Slicing

Density Slicing is the mapping of a range of contiguous grey levels of a single band image to a
point in the RGB color cube. The DNs of a given band are "sliced" into distinct classes. For
example, for band 4 of a TM 8 bit image, we might divide the 0-255 continuous range into
discrete intervals of 0-63, 64-127, 128-191 and 192-255. These four classes are displayed as four
different grey levels. This kind of density slicing is often used in displaying temperature maps.
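
The four-class example above reduces to one line with numpy's digitize (a sketch; the class
edges follow the text):

import numpy as np

def density_slice(band):
    """Slice an 8-bit band into the classes 0-63, 64-127, 128-191, 192-255."""
    return np.digitize(band, [64, 128, 192])    # class labels 0..3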


16 Image enhancement techniques


Image Enhancement operations are carried out to improve the interpretability of the image by
increasing apparent contrast among various features in the scene. The enhancement techniques
depend upon two factors mainly

• The digital data (i.e. with spectral bands and resolution)


• The objectives of interpretation

As image enhancement techniques often drastically alter the original numeric data, they are
normally used only for visual (manual) interpretation and not for further numeric analysis.
Common enhancements include image reduction, image rectification, image magnification,
transect extraction, contrast adjustments, band rationing, spatial filtering, Fourier
transformations, principal component analysis and texture transformation.

Image Enhancement Techniques:-

Image enhancement techniques are applied to make satellite imagery more informative and to
help achieve the goal of image interpretation. The term enhancement means the alteration of the
appearance of an image in such a way that the information contained in the image is more readily
interpreted visually in terms of a particular need. The image enhancement techniques are applied
either to single-band images or separately to the individual bands of a multiband image set. These
techniques can be categorized into two:

• Spectral Enhancement Techniques


• Multi-Spectral Enhancement Techniques

Spectral Enhancement Techniques:-

Density Slicing

Density Slicing is the mapping of a range of contiguous grey levels of a single band image to a
point in the RGB color cube. The DNs of a given band are "sliced" into distinct classes. For
example, for band 4 of a TM 8 bit image, we might divide the 0-255 continuous range into
discrete intervals of 0-63, 64-127, 128-191 and 192-255. These four classes are displayed as four
different grey levels. This kind of density slicing is often used in displaying temperature maps.

Contrast Stretching

The operating or dynamic, ranges of remote sensors are often designed with a variety of eventual
data applications. For example for any particular area that is being imaged it is unlikely that the
full dynamic range of sensor will be used and the corresponding image is dull and lacking in
contrast or over bright. Land sat TM images can end up being used to study deserts, ice sheets,
oceans, forests etc., requiring relatively low gain sensors to cope with the widely varying
radiances upwelling from dark, bright, hot and cold targets. Consequently, it is unlikely that the
full radiometric range of brand is utilized in an image of a particular area. The result is an image
lacking in contrast - but by remapping the DN distribution to the full display capabilities of an
image processing system, we can recover a beautiful image.


Contrast Stretching can be displayed in three categories:

Linear Contrast Stretch

This technique involves the translation of the image pixel values from the observed range, DNmin
to DNmax, to the full range of the display device (generally 0-255, the range of values
representable in an 8-bit display device). The technique can be applied to a single-band,
grey-scale image, where the image data are mapped to the display via all three colour LUTs.

It is not necessary to stretch between DNmin and DNmax: inflection points for a linear contrast
stretch may instead be placed at the 5th and 95th percentiles, or at ±2 standard deviations from
the mean of the histogram (for instance), or chosen to favour the land-cover class of interest
(e.g., water at the expense of land, or vice versa). It is also straightforward to have more than two
inflection points in a linear stretch, yielding a piecewise linear stretch.

Histogram Equalization

The underlying principle of histogram equalization is straightforward and simple: it is assumed
that each level in the displayed image should contain an approximately equal number of pixel
values, so that the histogram of the displayed values is almost uniform (though not all 256
classes are necessarily occupied). The objective of histogram equalization is to spread the
range of pixel values present in the input image over the full range of the display device.
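
A minimal numpy sketch of histogram equalization via the cumulative histogram, assuming an
8-bit integer band (illustrative, not the source's own code):

import numpy as np

def hist_equalize(band):
    hist, _ = np.histogram(band.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to 0-1
    lut = (cdf * 255.0).astype(np.uint8)                # 256-entry look-up table
    return lut[band]                                    # apply LUT to every pixel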

Gaussian Stretch

This method of contrast enhancement is based upon the histogram of the pixel values and is
called a Gaussian stretch because it involves fitting the observed histogram to a normal or
Gaussian histogram, defined as follows:

f(x) = (a/π)^0.5 exp(-a x²)

Multi-Spectral Enhancement Techniques:-

Image Arithmetic Operations

The operations of addition, subtraction, multiplication and division are performed on two or more
co-registered images of the same geographical area. These techniques are applied to images from
separate spectral bands of a single multispectral data set, or to individual bands from image data
sets that have been collected on different dates. More complicated algebra is sometimes
encountered in the derivation of sea-surface temperature from multispectral thermal infrared data
(the so-called split-window and multichannel techniques).

Addition of images is generally carried out to produce an output image whose dynamic range
equals that of the input images.

Band subtraction is sometimes carried out on co-registered scenes of the same area acquired at
different times for change detection.


Band ratioing:-

Band ratioing, or division of images, is probably the most common arithmetic operation applied
to images in geological, ecological and agricultural applications of remote sensing. Ratio images
are enhancements resulting from the division of the DN values of one spectral band by the
corresponding DN values of another band. One motivation for this is to iron out differences in
scene illumination due to cloud or topographic shadow. Ratio images also bring out spectral
variation between different target materials. Multiple ratio images can be used to drive the red,
green and blue monitor guns for colour images. Interpretation of ratio images must consider that
they are "intensity blind", i.e., dissimilar materials with different absolute reflectances but similar
relative reflectances in the two or more utilized bands will look the same in the output image.
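
A sketch of two common forms: a simple band ratio, and the normalized-difference style of
ratio (of which NDVI is the best-known example). The small epsilon guarding against division
by zero is an implementation detail, not part of the definition:

import numpy as np

def simple_ratio(band1, band2, eps=1e-6):
    return band1.astype(float) / (band2.astype(float) + eps)

def normalized_difference(band1, band2, eps=1e-6):
    b1, b2 = band1.astype(float), band2.astype(float)
    return (b1 - b2) / (b1 + b2 + eps)     # NDVI when band1 = NIR, band2 = red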

Spatial filtering:-

Spatial filtering is a “local” operation in that pixel values in the original image are modified on the
basis of the grey levels of neighbouring pixels. Spatial filters emphasize or de-emphasize image
data of various spatial frequencies (the roughness of the tonal variations in an image). Spatial
frequency is defined as the number of changes in brightness value per unit distance in any
particular part of an image. High spatial frequency corresponds to rough areas, and low spatial
frequency to smooth areas.

• Low pass filter


• High pass filter


Spatial filtering can be described as selectively emphasizing or suppressing information at
different spatial scales over an image. Filtering techniques can be implemented through the
Fourier transform in the frequency domain, or in the spatial domain by convolution.

Spatial filters are of two types –

• Low-pass filters: emphasize large-area changes in brightness, de-emphasize local detail, and
reduce random noise.
• High-pass filters: emphasize local detail and de-emphasize large-area changes in brightness.

High and low frequency spatial filters.

Convolution Filters:-

One family of filtering methods is based upon the transformation of the image into its scale, or
spatial-frequency, components using the Fourier transform. The spatial-domain, or convolution,
filters are generally classed as either high-pass (sharpening) or low-pass (smoothing) filters.

Low-Pass (Smoothing) Filters

• Low-pass filters reveal the underlying two-dimensional waveform with a long wavelength, or
low-frequency, image contrast at the expense of the higher spatial frequencies. Low-frequency
information allows the identification of the background pattern, and produces an output
image in which the detail has been smoothed or removed from the original.
• A two-dimensional moving-average filter is defined in terms of its dimensions, which must
be odd, positive and integral but not necessarily equal, and its coefficients. The output DN
is found by dividing the sum of the products of the corresponding convolution-kernel and
image elements by the number of kernel elements.
• A related effect is given by a median filter, where the convolution kernel is a
description of the PSF weights. Choosing the median value from the moving window
does a better job of suppressing noise and preserving edges than the mean filter.
• Adaptive filters have kernel coefficients calculated for each window position, based on the
mean and variance of the original DNs in the underlying image.

High-Pass (Sharpening) Filters

Simply subtracting the low-frequency image resulting from a low pass filter from the original
image can enhance high spatial frequencies. High -frequency information allows us either to
isolate or to amplify the local detail. If the high-frequency detail is amplified by adding back to
the image some multiple of the high-frequency component extracted by the filter, then the result is
a sharper, de-blurred image.

High-pass convolution filters can be designed by representing the PSF with a positive centre
weight and negative surrounding weights. A typical 3x3 Laplacian filter has a kernel with a high
central value, 0 at each corner, and -1 at the centre of each edge. Such filters can be biased in
certain directions for the enhancement of edges.

High-pass filtering can also be performed simply from the mathematical concept of derivatives,
i.e., gradients in DN throughout the image. Since images are not continuous functions, calculus is
dispensed with and derivatives are instead estimated from the differences in the DN of adjacent
pixels in the x, y or diagonal directions. Directional first differencing aims at emphasizing edges
in an image.
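A minimal sketch of both high-pass approaches, assuming the same kind of 2-D DN array as in
the low-pass sketch: the 3x3 Laplacian kernel described above, and a directional first
difference in the x direction.

import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (100, 100)).astype(float)

# 3x3 Laplacian: high central value, 0 at each corner, -1 at the centre
# of each edge.
laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)
high_pass = ndimage.convolve(image, laplacian, mode="nearest")

# Directional first differencing: subtract each pixel's right-hand
# neighbour to emphasize edges running across the x direction.
dx = np.zeros_like(image)
dx[:, :-1] = image[:, :-1] - image[:, 1:]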

Edge enhancement:-

• Edge enhancement is a digital image processing filter that improves the apparent
sharpness of an image or video. The creation of bright and dark highlights on either side
of any line leaves the line looking more contrasted from a distance. The process is most
prevalent in the video field, appearing to some degree in the majority of TV broadcasts
and DVDs. Standard television sets' "sharpness" control is an example of edge
enhancement. It is also widely used in computer printers especially for font or/and
graphics to get a better printing quality.
• Edge enhancement is concerned with the linear features in images. Some linear features
occur as narrow lines against a background of contrasting brightness; others are the linear
contact between adjacent areas of different brightness. In all cases linear features are
formed by edges. Some edges are marked by pronounced differences in brightness and
are readily recognized.
• Other edges are marked by subtle brightness differences that may be difficult to recognize.
Contrast enhancement may emphasize the brightness differences associated with some linear
features.
• Edge-enhanced images attempt to preserve both local contrast and low-frequency
brightness information.
• A high-frequency component image is produced using an appropriate kernel size.
• All or a fraction of the grey level in each pixel is added back to the high-frequency
component image (see the sketch below).
• The composite image is contrast-stretched.

Digital filters have been developed specifically to enhance edges in images and fall into two
categories: directional and non-directional filters.
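A minimal sketch of the edge-enhancement procedure listed above (high-frequency extraction,
add-back, contrast stretch); the add-back fraction and kernel are illustrative choices, not
prescribed values.

import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (100, 100)).astype(float)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)

high_freq = ndimage.convolve(image, kernel, mode="nearest")
fraction = 0.5                      # fraction of high-frequency detail added back
composite = image + fraction * high_freq

# Linear contrast stretch of the composite to the 0-255 display range.
stretched = (composite - composite.min()) / (composite.max() - composite.min()) * 255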


17 Image Classification: Supervised Classification


Image classification refers to the computer-assisted interpretation of remotely sensed images. The
objective of image classification procedures is to automatically categorize all pixels in an image
into land cover classes or themes. For categorization, the spectral pattern present within the data
for each pixel is used as the numerical basis. The term pattern refers to the set of radiance
measurements obtained in the various wavelength bands for each pixel.

The family of classification procedures can be categorized into:

• Spectral pattern recognition
• Spatial pattern recognition
• Temporal pattern recognition

1. Spectral pattern recognition:-

This procedure utilizes the pixel by pixel spectral information as the basis for automated land
cover classification.

2. Spatial pattern recognition:-

This procedure involves the categorization of image pixels on the basis of their spatial
relationship with pixels surrounding them. Aspects such as image texture, pixel proximity, feature
size, shape, directionality, repetition and context are covered in this procedure.

3. Temporal pattern recognition:-

This procedure uses time as an aid in feature identification. Data are analyzed from imagery
recorded on different dates. This is particularly pertinent in the case of crop surveys, since crop
imagery changes over the growing season.
These procedures can be combined when the need arises. The approach chosen depends on the
nature of the data being analyzed, the computational resources available and the intended
application of the classified data.
The two main approaches in multispectral classification can be identified as:

• Supervised classification
• Unsupervised classification

In the case of supervised classification, the software system delineates specific land cover types
based on statistical characterization data drawn from known examples in the image (known as
training sites). With unsupervised classification, however, clustering software is used to uncover
the commonly occurring land cover types, with the analyst providing interpretations of those
cover types at a later stage.
When the accuracy and efficiency of the classification process needs to be improved, then aspects
of both supervised and unsupervised classification can be combined to arrive at a hybrid
classification procedure.

SUPERVISED CLASSIFICATION:

87
GIS Reader

A typical supervised classification process involves three basic steps:

1. Training stage:–

• The analyst identifies representative training areas and develops numerical descriptions of
the spectral signatures of each land cover type of interest in the scene. This is also called
signature analysis.
• The actual classification of multispectral image data is a highly automated process.
However, assembling the training data needed for classification requires close interaction
between the image analyst and the image data. It also requires substantial reference data
and a thorough knowledge of the geographic area to which the data apply. The quality of
the training process determines the success of the classification stage and thereby the
value of the information generated from the entire procedure.
• It is during the training stage that the location, size, shape and orientation of the training
sites for each land cover class are determined.
• The training data must be representative and complete. This implies that the image
analyst must develop training statistics for all spectral classes constituting each
information class to be discriminated by the classifier. For example, an information class
such as agriculture will contain different crop types and each crop type might be
represented by several spectral classes. These spectral classes would arise from different
planting dates, soil moisture conditions, crop management practices, seed varieties and
several other factors and their combinations.

2. The classification stage:–

Each pixel in the image data set is categorized into the land cover class it most closely resembles.
If the pixel is insufficiently similar to any training data, it is usually labeled 'unknown'. Classifiers
are the techniques used for making these decisions about resemblance. There are three different
kinds of classifiers: hard, soft and hyperspectral.

Hard classifier:-

The distinguishing characteristic of hard classifiers is that they all make a definitive decision
about the land cover class to which any pixel belongs. IDRISI offers three supervised classifiers
in this group: Parallelepiped (PIPED), Minimum Distance to Means (MINDIST), and Maximum
Likelihood (MAXLIKE). They differ only in the manner in which they develop and use a
statistical characterization of the training site data. Of the three, the Maximum Likelihood
procedure is the most sophisticated, and is unquestionably the most widely used classifier in the
classification of remotely sensed imagery.
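As a simple illustration of a hard classifier, here is a minimal sketch of minimum distance to
means (the MINDIST idea), assuming training statistics have already been reduced to one mean
vector per class; the class names, band values and distance threshold are all illustrative.

import numpy as np

# Mean spectral vectors (4 bands) derived from training sites -- stand-ins.
class_means = {
    "water":  np.array([30.0, 20.0, 12.0, 10.0]),
    "forest": np.array([40.0, 35.0, 30.0, 60.0]),
    "urban":  np.array([70.0, 65.0, 60.0, 55.0]),
}

def classify_pixel(pixel, means, max_distance=50.0):
    # Assign the pixel to the nearest class mean; label it 'unknown'
    # if it is insufficiently similar to any training class.
    best_label, best_dist = None, np.inf
    for label, mean in means.items():
        dist = np.linalg.norm(pixel - mean)      # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else "unknown"

print(classify_pixel(np.array([32.0, 22.0, 14.0, 12.0]), class_means))  # -> water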

Soft classifier:-

Contrary to hard classifiers, soft classifiers do not make a definitive decision about the land cover
class to which each pixel belongs. Rather, they develop statements of the degree to which each
pixel belongs to each of the land cover classes being considered. Thus, for example, a soft
classifier might indicate that a pixel has a 0.72 probability of being forest, a 0.24 probability of
being pasture, and a 0.04 probability of being bare ground. A hard classifier would resolve this
uncertainty by concluding that the pixel was forest. However, a soft classifier makes this
uncertainty explicitly available, for any of a variety of reasons. For example, the analyst
might conclude that the uncertainty arises because the pixel contains more than one cover
type and could use the probabilities as indications of the relative proportion of each. This
is known as sub-pixel classification. Alternatively, the analyst may conclude that the
uncertainty arises because of unrepresentative training site data and therefore may wish to


combine these probabilities with other evidence before hardening the decision to a final
conclusion.

Hyperspectral classifier:-

All of the classifiers mentioned above operate on multispectral imagery—images where several
spectral bands have been captured simultaneously as independently accessible image
components. Extending this logic to many bands produces what has come to be known as
hyperspectral imagery. Although there is essentially no difference in kind between hyperspectral
and multispectral imagery (they differ only in degree), the volume of data and the high spectral
resolution of hyperspectral images do lead to differences in the way they are handled.

3. The output stage:-

The typical forms of output products are thematic maps, tables and digital data files, which
become input data for GIS. (Figure: flow of operations in a supervised classification.)


Unsupervised classification:

• This procedure examines the data and breaks it into the most prevalent natural spectral
groupings, or clusters, present in the data. The analyst then identifies these clusters as
land cover classes through a combination of familiarity with the region and ground truth
visits. The logic by which unsupervised classification works is known as cluster analysis.

• In contrast to supervised classification, where the system needs to be told about the
character (i.e., signature) of the information classes we are looking for, unsupervised
classification requires no advance information about the classes of interest. It is important
to recognize, however, that the clusters unsupervised classification produces are not
information classes, but spectral classes (i.e., they group together features (pixels) with
similar reflectance patterns). It is thus usually the case that the analyst needs to reclassify
spectral classes into information classes. For example, the system might identify classes
for asphalt and cement which the analyst might later group together, creating an
information class called pavement.
• Access to efficient hardware and software is an important factor in determining the ease
with which an unsupervised or supervised classification can be performed. The quality of
the classification will depend upon the analyst’s understanding of the concepts behind the
classifiers available and knowledge about the land cover types under analysis.

Hybrid classification:-

• This form of classification has been developed to improve the accuracy of purely supervised
or unsupervised procedures. For example, unsupervised training areas might be delineated in
an image in order to aid the analyst in identifying the numerous spectral classes that need to
be defined in order to adequately represent the land cover information classes to be
differentiated in a supervised classification. Unsupervised training areas are image sub-areas
chosen intentionally to be different from supervised training areas.
• Hybrid classifiers are particularly valuable in analyses where there is complex variability
in the spectral response patterns for individual cover types present. These conditions are
quite common in applications such as vegetation mapping. Guided clustering is a hybrid
approach that has been proved to be very effective in such circumstances.


18 Image Classification: Discriminant Functions: Maximum Likelihood Classifier, Euclidean
Distance, Mahalanobis Distance

Euclidean Distance
The distance between two points is the length of the straight-line path connecting them. In the
plane, the distance between points (x1, y1) and (x2, y2) is given by the Pythagorean theorem:

d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}

In Euclidean three-space, the distance between points (x1, y1, z1) and (x2, y2, z2) is

d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}

Mahalanobis distance

Given two "points" x1 and x2 defined by numerical attributes (e.g., two observations), the
distance between these two points is given by the traditional Euclidean distance:

d(x_1, x_2) = \sqrt{(x_1 - x_2)^T (x_1 - x_2)}

Given a multivariate normal distribution, one defines the (square of the) Mahalanobis distance
of an observation X to the barycenter g of the distribution as follows:

D_M^2(X) = (X - g)^T \Sigma^{-1} (X - g)

with Σ as the covariance matrix of the distribution.

Two observations sitting in regions with the same density are at the same (Mahalanobis) distance
from the barycenter (although their Euclidean distances from the barycenter may be quite
different). Points that are at a given Mahalanobis distance from the barycenter sit on an ellipsoid
centered on the barycenter.
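A minimal sketch comparing the two distances for an observation X against a class barycenter g,
with an illustrative two-band covariance matrix.

import numpy as np

g = np.array([10.0, 20.0])            # barycenter (mean vector) -- stand-in
S = np.array([[4.0, 1.0],             # covariance matrix of the distribution
              [1.0, 2.0]])
X = np.array([12.0, 24.0])            # observation

euclidean = np.linalg.norm(X - g)

# Squared Mahalanobis distance: (X - g)^T S^-1 (X - g)
diff = X - g
mahalanobis = np.sqrt(diff @ np.linalg.inv(S) @ diff)
print(euclidean, mahalanobis)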


Example of application of Mahalanobis Distance

An automated approach to mapping corn from Landsat imagery


1. Introduction
Knowledge of the spatial distribution of specific crop types is important for many environmental
and health studies. For example, once the location of crops is determined, parameters such as
pesticide use can be estimated and incorporated into an environmental model for exposure
assessment for health studies (Ward et al., 2000). Such maps covering extensive geographical
regions can only be derived from satellite imagery. Landsat satellite imagery has been
successfully used to classify many different crop types.
2. Methods
2.1. Study area and data description
Four counties in south central Nebraska were selected for our study: Hall, Kearney, Nuckolls, and
Thayer. The crops grown in these four counties represent the dominant crops grown in south
central Nebraska which include corn, sorghum, soybeans, and winter wheat.

2.2. Classification methodology


The classification process involves three steps. The first step is to identify representative samples
of corn in the Landsat image from which to derive the spectral training pattern for corn. This corn
spectral training pattern is then compared to every pixel in the image and the spectral distance
between them is calculated. This distance measurement is then refined in the final step into three
classes (‘highly likely corn,’ ‘likely corn,’ and ‘unlikely corn’).
The first step, corn spectral training pattern calculation, is accomplished by identifying a specific
county (or sub-region) within the Landsat image from which to collect a representative sample of
corn pixels. Selection of this county is based on two criteria: (1) the county with the highest
proportion of corn as compared to other crops grown and (2) the county with the highest number
of corn hectares grown. This ensures that the dominant spectral tone within the sub-image
selected will represent corn. Hall County was chosen in our study because it met both criteria for
selection. Twenty contiguous samples were selected from the bivariate histogram of the red
visible band (band 2) and the near-infrared band (band 4) of the Landsat image for Hall County.
Samples were collected beginning at the highest point in the bivariate histogram (band 2 = 15,
band 4 = 58) and proceeding with the next highest point until twenty samples were selected.
These samples were then used to calculate the spectral response pattern for corn.
The Mahalanobis distance measurement (Duda and Hart, 1973) for each pixel in the Landsat
image is then calculated using the corn spectral training pattern. The Mahalanobis distance
measurement (Duda and Hart, 1973) is used in our method to determine the ‘likelihood’ that an
individual pixel is corn. The Mahalanobis distance represents the spectral distance from the
original corn training pattern to an individual pixel and therefore this distance can be used to
determine how likely the pixel is to be corn. Pixels that have low distance values are more likely
to be corn and pixels with high values are less likely to be corn. Assigning this confidence label at
the pixel level is important for identifying potential errors in estimating chemical exposure.
Agricultural areal estimates are used in the final step to refine the Mahalanobis distance
measurement to one of three categories: highly likely to be corn, likely to be corn, or unlikely to
be corn. NASS areal estimates for corn are used to determine cutoff points by comparing the total
acreage of corn grown in a particular county to the acreage represented by each distance value.
We classified pixels as ‘highly likely to be corn’ for distance values representing up to
approximately 75.0% of the total acreage of corn. Pixels classified as ‘likely to be corn’ were
distance values representing the remaining 25.0% of the total acreage for corn. All other pixels
were classified as ‘unlikely to be corn.’ The 75.0% cutoff value was based on a sensitivity
analysis performed on the three test counties through a trial and error process.
Pixels with Mahalanobis distance values from 1 to 42 are classified as highly likely to be corn,
because the cumulative total number of hectares is approximately 75.0% of the acreage estimated
by the NASS. Distance values from 43 through 111 are classified as likely to be corn, because the
cumulative acreage for these pixels constitutes the remaining 25.0% of the acreage estimated by
NASS. Distance values greater than 111 are classified as unlikely to be corn (see the sketch
below).
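A minimal sketch of this recoding step, using the cutoff values reported in the study (42 and
111); the distance array is a random stand-in for the per-pixel Mahalanobis distances to the corn
training pattern.

import numpy as np

distance = np.random.randint(1, 200, (100, 100))   # stand-in distance image

classes = np.full(distance.shape, "unlikely corn", dtype=object)
classes[distance <= 111] = "likely corn"           # 43-111
classes[distance <= 42] = "highly likely corn"     # 1-42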

3. Results
Overall average accuracy (correctly classified samples for all classes divided by total number of
samples) was 92.2%, with individual county accuracies ranging from 90.1 to 96.5%.
4. Discussion
The results of our study indicate that an automated approach to classifying corn from Landsat
satellite imagery may be feasible. The primary advantage of this method is the ability to perform
rapid interpretation of the satellite imagery without the need for ground reference data to ‘train’
the classification algorithm. This is especially important in creating historical maps, because
ground reference data may not be available.

Example of application of Euclidean Distance

Ground Motion Amplification of Soils in the Upper Mississippi Embayment

Site Classification Using Remote Sensing Imagery

The correlation between strong ground motions and geology was identified in the mid-1800s (Del
Barrio, 1855; Mallet, 1862). Recent studies by Borcherdt (1994) and Anderson et al. (1996) have
quantified the influence of near-surface geology on ground motion. These studies suggest using
geology as an initial regional classification for seismic zonation. In this study, the use of remote
sensing imagery for regional classification is evaluated. In particular, the objective is to identify
Holocene-age deposits that may be susceptible to ground motion amplification. Site response is
then determined for Holocene-age and Pleistocene-age deposits in the Mississippi Embayment
based on additional subsurface information.

Holocene-age alluvial deposits in the floodplains are distinguished from loess deposits of
Pleistocene/Pliocene age in the inland, terrace regions based on spectral contrast and texture.
Agbu et al. (1990) observed that spectral reflectance is related to subsurface conditions since
subsurface conditions affect the properties observed at the surface. The variation in soil type,
moisture content, and geology influences the spectral reflectance and texture. Therefore, spectral
reflectance and texture are the basis for classification in this study.

Landsat TM Images

The Landsat Thematic mapper (TM) is a multispectral satellite measuring electromagnetic energy
in seven spectral bands ranging from the visible to the thermal infrared. Each pixel represents an
area 30 m by 30 m for six of the seven bands whereas pixels in the thermal infrared band
represent an area 120 m by 120 m. An image from the Landsat TM satellite was selected to
assess the feasibility of using satellite imagery for identifying regions susceptible to ground
motion amplification. In particular, imagery was analyzed to distinguish between Holocene-age
and Pleistocene-age deposits. Holocene-age deposits are susceptible to ground motion
amplification due to the loose, unconsolidated state of deposition. In the Central United States,
Holocene-age deposits are found throughout the floodplains of major rivers. Pleistocene-age
deposits are located in the upland, terrace regions. Analysis of imagery focused on
distinguishing between the two geologic deposits.


The Landsat TM image was obtained from the USGS Earth Resources Observation Systems
(EROS) Data Center and georeferenced to the Universal Transverse Mercator (UTM) coordinate
system that is based on the North American Datum of 1927. The image was obtained on
November 22, 1986 from the Landsat TM 5 satellite launched in March 1984. Autumnal images
were selected due to the lack of vegetation cover allowing imaging of the surface geology. A
portion of the acquired image is shown in Figure 1.

Figure 1 Part of Landsat TM image acquired showing the Jackson Purchase area of western
Kentucky.

Study Area

The study area was selected to evaluate the use of Landsat TM imagery for regional seismic
zonation and is located northeast of the NMSZ. The study area is a subset of the area in Figure 1
and is located in the Jackson Purchase region of western Kentucky. The study area is bounded
by the Ohio River to the northwest and the Mississippi River to the southwest. Figure 2 shows
the selected study area, including parts of Kentucky, Missouri, and Illinois, and is composed of
1000 by 1000 pixels.

Figure 2: Principal component image used for analysis. Figure 3: Study area selected for
analysis, shown in false color.


Spectral Classification

The first approach to classification or segmentation is based on the pixel brightness values or
relative spectral reflectance of the image. Histogram equalization was used to enhance the
contrast in the image. The image in Figure 3 was then passed through a low-pass filter to reduce
the effect of cultural boundaries and agricultural features and enhance geologic features. The
image was then classified by image segmentation where low pixel values (dark pixels) were
labeled Holocene-age deposits and high pixel values (white pixels) were labeled Pleistocene-age
deposits. The result of this classification is shown in Figure 4.

Figure 4 Result of spectral classification

Texture Classification

Texture is related to patterns in pixel brightness values. Several approaches have been applied to
quantify textural analysis including first-order and second-order statistics, directional filters, and
fractal geometry. First-order statistics include calculating the mean and standard deviation of a
pixel cluster. First-order statistics are used in this study to quantify texture and are described
below. The statistics of a 35 by 35 pixel neighborhood were compared with the means of
identified Holocene-age and Pleistocene-age regions. The minimum Euclidean distance was used
to classify pixels (see the sketch following Figure 5). The results of the texture classification are
shown in Figure 5.

Figure 5 Result of texture classification.
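A minimal sketch of this texture classification, assuming a single-band image as a 2-D NumPy
array; the 35-by-35 window follows the study, while the reference (mean, standard deviation)
statistics for the two deposit types are illustrative stand-ins.

import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (200, 200)).astype(float)
win = 35

# First-order statistics over a moving window: mean and standard deviation.
mean = ndimage.uniform_filter(image, size=win)
mean_sq = ndimage.uniform_filter(image**2, size=win)
std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

holocene = np.array([60.0, 8.0])       # reference (mean, std) -- stand-in
pleistocene = np.array([140.0, 20.0])  # reference (mean, std) -- stand-in

stats = np.stack([mean, std], axis=-1)
d_holocene = np.linalg.norm(stats - holocene, axis=-1)
d_pleistocene = np.linalg.norm(stats - pleistocene, axis=-1)
is_holocene = d_holocene < d_pleistocene   # True where Holocene is the nearer class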


19. Image Classification: Unsupervised Classification


What is image classification?

The intent of the classification process is to categorize all pixels in a digital image into one of
several land cover classes, or "themes". This categorized data may then be used to produce
thematic maps of the land cover present in an image. Normally, multispectral data are used to
perform the classification and, indeed, the spectral pattern present within the data for each pixel is
used as the numerical basis for categorization. The objective of image classification is to identify
and portray, as a unique gray level (or color), the features occurring in an image in terms of the
object or type of land cover these features actually represent on the ground.

Image classification is perhaps the most important part of digital image analysis. It is very nice to
have a "pretty picture", an image showing a magnitude of colors illustrating various features of
the underlying terrain, but it is quite useless unless one knows what the colors mean. The two
main classification methods are supervised classification and unsupervised classification.

Automatic classification of the pixels making up a remotely-sensed image involves associating
each pixel in the image with a label describing a real-world object. It is a problem of recognition
in that the numerical values associated with each pixel are normally required to be identified in
terms of an observable geographical, geological or other Earth-surface cover type (though the
objects of interest in other cases may be cloud types or water-quality classes). For example, a
pixel may have quantized values of {30, 20, 12, 10} in Landsat-4 MSS bands 1 to 4 respectively.
The user may expect an automatic classification procedure to give that pixel the label "water" or
"shadow" on the basis of comparison with the spectral reflectance characteristics ("spectral
signatures") of objects known to occur in the study area. If this labelling operation is carried out
for all pixels in the area then the result is a thematic map, showing the geographical distribution of
a "theme" such as vegetation type or water quality rather than the multifarious details associated
with each place, as represented on a topographic map. A classified remotely-sensed image is thus
a form of digital thematic map and, if the geometry is transformed so as to match a recognized
map projection, it is in a form suitable for incorporation into a digital geographic information
system.

A set of values for a single pixel on each of a number of spectral bands, such as {30, 20, 12, 10},
is often referred to as a pattern. The characteristics or variables (such as Landsat-4 MSS bands 1,
2, 3 and 4) which define the basis of the pattern are called features. A pattern is thus a set of
measurements on the chosen features. Hence the classification process can be described as a form
of pattern recognition, or the identification of the pattern associated with each pixel position in an
image in terms of characteristics of the objects or materials at the corresponding point on the
Earth's surface. Pattern recognition methods have found widespread use in fields other than
environmental remote sensing; for example, military applications include the identification of
approaching aircraft and the detection of targets for cruise missiles. Robot or computer vision
involves the use of mathematical descriptions of objects "seen" by a television camera
representing the robot eye, and the comparison of these mathematical descriptions with patterns
describing objects in the real world. In every case, the crucial steps are (i) selection of the
particular features which best describe the pattern and (ii) choice of a suitable method for the
comparison of the pattern describing the object being classified and the target patterns. In remote
sensing applications it is usual to include a third stage, that of assessing the degree of accuracy of
the allocation process.


Unsupervised classification

Unsupervised classification is a method which examines a large number of unknown pixels and
divides them into a number of classes based on natural groupings present in the image values.
Unlike supervised classification, unsupervised classification does not require analyst-specified
training data. The basic premise is that values within a given cover type should be close together
in the measurement space (i.e., have similar gray levels), whereas data in different classes should
be comparatively well separated (i.e., have very different gray levels). The classes that result from
unsupervised classification are spectral classes based on natural groupings of the image values;
because the identity of each spectral class is not initially known, the classified data must be
compared to some form of reference data (such as larger-scale imagery, maps, or site visits) to
determine the identity and informational value of the spectral classes. Thus, in the supervised
approach we define useful information categories and then examine their spectral separability;
in the unsupervised approach the computer determines spectrally separable classes, and we then
define their information value (see the sketch below).
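A minimal sketch of cluster-based unsupervised classification, here using k-means from
scikit-learn as one common clustering choice (not the only one); the 4-band image is a random
stand-in.

import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(100, 100, 4)            # stand-in (rows, cols, bands) image
pixels = image.reshape(-1, image.shape[-1])    # one row per pixel

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(pixels)
clusters = kmeans.labels_.reshape(image.shape[:2])

# 'clusters' holds spectral classes 0-9; the analyst must still assign each
# cluster an information class (e.g., merging asphalt and cement clusters
# into a single 'pavement' class).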

Figure: satellite image and the corresponding classified image with 10 classes.

Unsupervised classification is becoming increasingly popular in agencies involved in long-term
GIS database maintenance. The reason is that there are now systems using clustering procedures
that are extremely fast and require little in the way of operational parameters. Thus it is becoming
possible for GIS analysts with only a general familiarity with remote sensing to undertake
classifications that meet typical map accuracy standards. With suitable ground truth


accuracy assessment procedures, this tool can provide a remarkably rapid means of producing
quality land cover data on a continuing basis.

Classification accuracy assessment

Classification accuracy assessment is a general term for comparing the classification to
geographical data that are assumed to be true, in order to determine the accuracy of the
classification process. Usually, the assumed-true data are derived from ground truth. It is usually
not practical to ground-truth or otherwise test every pixel of a classified image. Therefore a set of
reference pixels is usually used. Reference pixels are points on the classified image for which
actual data are (or will be) known. The reference pixels are randomly selected.

Once a classification exercise has been carried out there is a need to determine the degree of error
in the end-product. These errors could be thought of as being due to incorrect labeling of the
pixels.

The basic idea is to compare the predicted classification (supervised or unsupervised) of each
pixel with the actual classification as discovered by ground truth.

Four kinds of accuracy information:

I. Nature of the errors: what kinds of information are confused?
II. Frequency of the errors: how often do they occur?
III. Magnitude of errors: how bad are they? E.g., confusing old-growth with second-growth
forest is not as 'bad' an error as confusing water with forest.
IV. Source of errors: why did the error occur?

The Confusion Matrix (Error Matrix)

The most commonly-used method of representing the degree of accuracy of a classification is to
build a confusion (or error) matrix.

The analyst selects a sample of pixels and then visits the sites (or vice versa), and builds a
confusion matrix (IDRISI module CONFUSE). This is used to determine the nature and
frequency of errors.
frequency of errors.

columns = ground data (assumed ‘correct’)

rows = map data (classified by the automatic procedure)

cells of the matrix = count of the number of observations for each (ground, map) combination

diagonal elements = agreement between ground and map; ideal is a matrix with all zero off-
diagonals

errors of omission (related to the map producer's accuracy) = incorrect in column / total in
column. Measures how well the map maker was able to represent the ground features.

errors of commission (related to the map user's accuracy) = incorrect in row / total in row.
Measures how likely the map user is to encounter correct information while using the map.

Overall map accuracy = total on diagonal / grand total

A statistical test of the classification accuracy for the whole map or for individual cells is possible
using the kappa index of agreement. This is like a χ² (chi-square) test except that it accounts for
chance agreement.
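A minimal sketch of building the error matrix and deriving the accuracy measures just defined;
the class list and label arrays are illustrative stand-ins.

import numpy as np

classes = ["water", "forest", "urban"]
ground = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])   # reference labels (columns)
mapped = np.array([0, 0, 1, 2, 1, 2, 2, 0, 1, 1])   # classified labels (rows)

n = len(classes)
matrix = np.zeros((n, n), dtype=int)
for g, m in zip(ground, mapped):
    matrix[m, g] += 1                    # rows = map data, columns = ground data

total = matrix.sum()
overall = np.trace(matrix) / total                  # diagonal / grand total
producers = np.diag(matrix) / matrix.sum(axis=0)    # per-column (omission side)
users = np.diag(matrix) / matrix.sum(axis=1)        # per-row (commission side)

# Kappa: observed agreement corrected for chance agreement.
expected = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / total**2
kappa = (overall - expected) / (1 - expected)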

This method stands or falls by the availability of a test sample of pixels for each of the k
classes. The use of training-class pixels for this purpose is dubious—one cannot logically train
and test a procedure using the same data. A separate set of test pixels should therefore be used for
the calculation of classification accuracy. Users of the method should be cautious in interpreting
the results if the ground data from which the test pixels were identified were not collected on the
same date as the remotely-sensed image, for crops can be harvested or forests cleared. So far as
possible the test pixel labels should adequately represent reality.


20. Visual Image Analysis: Elements of Image Interpretation & Referencing Scheme of IRS
Satellite
Visual Image Analysis: Elements of Image Interpretation:-

Remote sensing is defined as the science and art of acquiring information about material objects
without being in touch with them. These measurements are possible because sensors or
instruments are designed to measure the spectral reflectance of earth objects. It has been
discovered that each earth cover type has its own spectral reflectance characteristics. These
characteristics are so distinctive that they are called a "signature", which enables us to discern
objects from their intermixed background.
The final remote sensing process is completed by the analysis of the data using image
interpretation techniques. Some key elements, or cues from the imagery, such as shape, size,
pattern, tone, colour, shadow and association, are used to identify a variety of features on earth.
The techniques of remote sensing and image interpretation yield valuable information on earth
resources. The different image interpretation elements are discussed below,

Shape
It is the general form, configuration and outline of the feature. In the case of stereoscopic
photographs the object height is also important, which helps identify the shape of the object. The
shape may not be regular, but it is very effective for image interpretation.

Size
The size of an object in a photograph is determined by the scale of the photograph. The sizes of
different objects in the same photograph help the interpreter to identify objects in many cases.

Pattern
It is the spatial arrangement of objects. The repetition of certain general forms of many natural or
constructed objects forms a pattern that helps the interpreter recognize them. The pattern can be
regular, curvilinear or meandering. For example, a river generally shows a meandering pattern,
and the pattern of agricultural land is regular in most cases.

Colour
The colour difference is very effective for identifying objects. For example, the colour of river
water in an aerial photograph appears black or dark grey, while a road appears white or light
grey.

Tone (or Hue)


It means the relative brightness of any object, which proves very effective for distinguishing
different objects. Without identifying tonal differences it is very difficult to differentiate two
objects by their shape, pattern or texture.

Texture
It is the frequency of tonal change in the image. It is basically the combination of shape, size,
pattern, shadow and tone. Texture is produced by an aggregation of unit features that may be too
small to identify individually in a photograph; for example, it is very difficult to identify each
leaf of a tree or its shadow. Texture gives the overall visual smoothness or coarseness of image
features. Texture also varies with scale: if we keep reducing the scale of an image, beyond a
certain limit the texture of an object becomes progressively finer and ultimately disappears.
Objects with similar reflectance can also be distinguished by their texture. For example, green
grass and a rough-textured green tree can easily be distinguished by their different textures.

Shadow
Shadows are important for two opposing reasons:
1. The shape of a shadow gives an impression of the profile view of the object, which aids
interpretation.
2. Objects within shadows reflect little light and are difficult to discern on photographs, which
hinders interpretation.
Shadows from subtle variations in terrain elevation, especially in the case of low-sun-angle
photographs, can aid in assessing natural topographic variations that may be diagnostic of various
geologic landforms.

Site
It means the geographic or topographic location. It is mostly important for identification of
vegetation types.

Association
It means the occurrence of an object in a photograph in relation to other objects.

Referencing Scheme of IRS Satellite:-

The referencing scheme, which is unique for each IRS satellite mission, is a means of
conveniently identifying the geographic location of points on the earth. The scheme is designated
by Paths and Rows. The Path-Row concept is based on the nominal orbital characteristics.

Path

An orbit is the course of motion taken by the satellite in space and the ground trace of the orbit is
called a 'Path'. In a 24 day cycle, the satellite completes 341 orbits with an orbital period of
101.35 minutes. This way, the satellite completes approximately 14 orbits per day. Though the
number of orbits and paths are the same, the designated path number in the referencing scheme
and the orbit number are not the same. On day one (D1), the satellite covers orbit numbers 1 to
14, which as per the referencing scheme are path numbers 1, 318, 294, 270, 246, 222, 198, 174,
150, 126, 102, 78, 54 and 30, assuming that the cycle starts with path 1. So orbit 1 corresponds
to path 1, orbit 2 to path 318, orbit 3 to path 294, and so on (see the sketch below). The fifteenth
orbit, i.e., the first orbit of day two (D2), is path 6, which lies to the east of path 1 and is separated
from path 1 by 5 paths.
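The orbit-to-path arithmetic above can be captured in a short sketch: each successive orbit falls
24 paths to the west in a 341-path cycle, starting from path 1. The formula is inferred from the
worked numbers in the text, not taken from an official specification.

def path_for_orbit(orbit, total_paths=341, step=24):
    # Path 1 for orbit 1; every later orbit shifts 24 paths westward (mod 341).
    return (-step * (orbit - 1)) % total_paths + 1

print([path_for_orbit(n) for n in range(1, 15)])
# -> [1, 318, 294, 270, 246, 222, 198, 174, 150, 126, 102, 78, 54, 30]
print(path_for_orbit(15))  # -> 6, the first orbit of day two, 5 paths east of path 1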

Path number one is assigned to the track which is at 29.7 deg West longitude. The gap between
successive paths is 1.055 deg. All subsequent orbits fall westward. Path 1 is so chosen, that the
pass with a maximum elevation greater than 86 deg for the data reception station of NRSA at
Shadnagar can be avoided. This is due to the limitation of antenna drive speed, since it is difficult
to track the satellite around zenith. In fact, if a pass occurs above 86 deg elevation, the data may
be lost for a few seconds around zenith. Hence, the path pattern is chosen such that overhead
passes over the data reception station are reduced to a minimum. To achieve this, path 1 is
positioned in such a manner that the data reception station lies exactly between two nominal
paths, namely 99 and 100. During operation, the actual path may vary from the nominal path
pattern due to orbital perturbations. Therefore, the orbit is adjusted periodically, after a certain
amount of drift, to bring the satellite back into the specified orbit.


The path pattern is controlled within ±5 km about the nominal path pattern. Due to this movement
of actual paths within ±5 km about the nominal path, it is not possible to totally avoid above-86-
deg elevation passes for Hyderabad. However, with this approach, the number of such passes is
reduced to almost one in a 24-day cycle.

Row

Along a path, the continuous stream of data is segmented into a number of scenes of convenient
size. While framing the scenes, the equator is taken as the reference line for segmentation. The
scenes are framed in such a manner that one of the scene centres lies on the equator. For example,
a LISS-III scene, consisting of 6000 lines, is framed such that the centre of the scene lies on the
equator. The next scene is defined such that its centre lies exactly 5,703 lines from the equator;
the centre of the scene after that is defined a further 5,703 lines northwards, and so on. This is
continued up to 81 deg North latitude. The lines joining the corresponding scene centres of
different paths are parallel to the equator and are called Rows. The scene centres are uniformly
separated so that the same rows of different paths fall at the same latitude. Row number 1 falls
around 81 deg North latitude, row number 41 lies near 40 deg North, and the row number of the
scene lying on the equator is 75. The Indian region is covered by row numbers 30 to 90 and path
numbers 65 to 130.

Figure: LISS-III and PAN scenes.

Use of Referencing Scheme

1. The Path-Row referencing scheme eliminates the usage of latitude and longitudes and
facilitates convenient and unique identification of a geographic location
2. Useful in preparing accession and product catalogues and reduces the complexity of data
products generation
3. Using the referencing scheme, the user can arrive at the number of scenes that cover his
area of interest. However, due to orbit and attitude variations during operation, the actual
scene may be displaced slightly from the nominal scene defined in the referencing scheme.
Hence, if the user's area of interest lies in the border region of any scene, the user may have
to order the overlapping scenes in addition to the nominal scene.

Comparison between IRS-1A/1B and IRS-1C Referencing Scheme:-

The referencing scheme of IRS-1C is different from that of IRS-1A/1B. In the IRS-1C referencing
scheme, the adjacent path occurs after five days and not on the next day as in the case of
IRS-1A/1B. This type of referencing scheme has been chosen keeping in view the PAN sensor, so
that the revisit capability of 5 days can be met. The following table gives the major differences in
the referencing scheme pattern of IRS-1C from IRS-1A/1B.

                              IRS-1A/1B       IRS-1C
Altitude                      904 km          817 km
Repetivity                    22 days         24 days
Consecutive path              D + 1 day       D + 5 days
Numbering of paths            East to West    West to East
Total number of orbits/cycle  307             341

Difference in referencing scheme pattern of IRS-1C and IRS-1A/1B

IRS-1C and 1D have slightly different orbits and for this reason do not have the same reference
system.
The mean equatorial crossing time in the descending node is 10:30 a.m. ± 5 minutes. The orbit
adjust system is used to attain the required orbit initially, and the orbit is maintained throughout
the mission period. The ground trace pattern is controlled within ± 5 km of the reference ground
trace pattern.


21. Significance of Spatial Analysis and Overview of Spatial Analysis Tools

Spatial Analysis:

GIS is designed to support a range of different kinds of analysis of geographic information: techniques to
examine and explore data from a geographic perspective, to develop and test models, and to present data
in ways that lead to greater insight and understanding. All of these techniques fall under the general
umbrella of "spatial analysis"

Significance of Spatial Analysis:

• Using Spatial Analyst, GIS users can create, query, map and analyze cell-based raster data, and
derive new information from existing data.
• Information about geospatial data, such as terrain analysis, spatial relationships and suitable
locations, can be obtained using Spatial Analyst. ArcGIS Spatial Analyst integrates real-world
variables such as elevation into the geospatial environment to help solve complex problems.
• ArcGIS Spatial Analyst bridges the gap between a simple map on a computer and real-world
analysis for deriving solutions to complex problems.

• Data Integration: ArcGIS Spatial Analyst integrates the user's data, enabling interaction between
data of many different types. Images, elevation models and other raster surfaces can be combined
with CAD data, vector data, internet data and many other formats to provide integrated analysis.

• Visualization: In addition to high-powered analysis and modeling, Spatial Analyst also allows
analysts to visualize their data as never before. ArcGIS Spatial Analyst is integrated with ArcMap,
so the user can create striking visual displays with the powerful symbology and annotation
options available.

• Sophisticated Raster Data Analysis: ArcGIS Spatial Analyst provides a robust environment for
advanced raster data analysis. This environment enables density mapping, distance analysis,
surface analysis, grid statistics, spatial modeling and surface creation.

• Query: A key component of Spatial Analyst is the ability to perform queries across different
raster data sets in the raster calculator. This allows the analyst to ask questions that draw on
several layers of information, for example: which areas are zoned for residential development and
have a high water table on a slope steeper than 15%? The query functionality gives the analyst
the ability to leverage existing data and to make more informed decisions (see the sketch below).
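A minimal sketch of such a query as plain raster algebra with NumPy (not the ArcGIS API); the
three co-registered grids, the residential zoning code, and the water-table threshold are
illustrative assumptions.

import numpy as np

zoning = np.random.randint(0, 5, (100, 100))   # 1 = residential (assumed code)
water_table = np.random.rand(100, 100) * 10    # depth to water table, metres
slope = np.random.rand(100, 100) * 40          # slope, percent

# Boolean raster: True where all three conditions hold.
result = (zoning == 1) & (water_table < 2.0) & (slope > 15.0)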


• Terrain Analysis: With Spatial Analyst anyone can derive useful information such as hillshade,
contour, slope, viewshed or aspect maps. These topographic surfaces give users the power to
relate their data to real-world elevations and analyze them.

• Spatial Modeling: ArcGIS Spatial Analyst provides the ability to create more sophisticated
spatial models for many different geospatial problems. Some of the process models of Spatial
Analyst include

Suitability Modeling: Most spatial models involve finding optimum locations such as finding the
best location to build a new school, landfill, or resettlement site.

Hydrological Modeling: Where will the water flow to?

Surface Modeling: What is ozone pollution level for various locations in a country?

With ArcGIS Spatial Analyst tools, one can:-

• Find suitable locations


• Calculate the accumulated cost of traveling from one point to another
• Perform land use analysis
• Predict fire risk
• Analyze transportation corridors
• Determine pollution levels
• Perform crop yield analysis
• Determine erosion potential
• Perform demographic analysis
• Conduct risk assessments
• Model and visualize crime patterns

OVERVIEW OF SPATIAL ANALYST

ArcGIS Spatial Analyst provides a rich set of tools to perform cell-based (raster) analysis.


Toolset Description

Conditional • The conditional tools allow for control of the output values based on the
conditions placed on the input values.
• The conditions that can be applied are either attribute queries or a condition that is based on
the position of the conditional statement in a list.
• A simple attribute query might be: if a cell value is greater than 5, multiply it by 10;
otherwise, assign 1 to the location.

Conversion • When feature data is to be converted into raster data, or when raster data needs to
be converted into another format, the Conversion tools are used.

Density • Calculation of density spreads point values out over a surface.
• The magnitude at each sample location (line or point) is distributed throughout a landscape,
and a density value is calculated for each cell in the output raster.
• For example, density analysis will take population counts assigned to town centers and
distribute the people throughout the landscape more realistically.
Distance • There are two main ways to perform distance analysis in ArcGIS Spatial Analyst:
Euclidean distance and cost distance.
• The Euclidean distance functions measure straight-line distance from each cell to the closest
source (the source identifies the objects of interest, such as wells, roads, or a school).
• The cost distance functions (or cost-weighted distance) modify Euclidean distance by treating
distance as a cost factor: the cost to travel through any given cell. For example, it may be
shorter to climb over the mountain to the destination, but it is faster to walk around it.

Geometric Transformation • The Geometric Transformation tools are used to manage and
manipulate the geometry of rasters. There are three main groups of Geometric Transformation
tools:
• those that change the geometry of the dataset through projections and georeferencing
(geometric transformation)
• those that change the orientation of the raster
• those that combine several adjacent rasters into a single raster
Groundwater • The Groundwater tools can be used to perform basic advection-dispersion
modeling of constituents in groundwater.
• The Groundwater tools can be applied individually or used in sequence to model and analyze
groundwater flow.

Interpolation • Surface interpolation functions create a continuous (or prediction) surface from
sampled point values.
• The continuous surface representation of a raster dataset represents height, concentration, or
magnitude (for example, elevation, pollution, or noise).
• Surface interpolation functions make predictions from sample measurements for all locations
in a raster dataset, whether or not a measurement has been taken at the location.

Math • ArcGIS Spatial Analyst provides a full set of mathematical operators and functions.
• These operators and functions allow for the arithmetic combination of the values in multiple
rasters, the mathematical manipulation of the values in a single input raster, the evaluation of
multiple input rasters, and the evaluation and manipulation of values in binary format.

Raster Creation • The raster creation functions create new rasters in which the output values are
based on a constant or a statistical distribution.
• The Create Constant Raster function creates an output raster of constant values within a
specified map extent and cell size.
• The Create Normal Raster function assigns values to an output raster so that the values
produce a normal distribution. The Create Random Raster (or the Map Algebra Rand) function
randomly assigns values to cells in an output raster.

Reclass • Reclassifying data simply means replacing input cell values with new output cell
values.
• There are many reasons to reclassify data. Some of the most common are: to replace values
based on new information, to group certain values together, to reclassify values to a common
scale (for example, for use in a suitability analysis or for creating a cost raster for use in the
Cost Distance function), to set specific values to NoData, or to set NoData cells to a value.
There are several approaches to reclassifying data:
* by individual values,
* by ranges,
* by intervals or area,
* or through an alternative value.

Surface • Additional information can be gained by producing a new dataset that identifies a
specific pattern within an original dataset.
• Patterns that were not readily apparent in the original surface can be derived, such as
contours, angle of slope, steepest downslope direction (aspect), shaded relief (hillshade), and
viewshed.


22. Surface Analysis: Interpolation Methods


Introduction
Spatial interpolation is done to estimate the values of objects at unsampled sites within areas
having existing observations. For interpolation we need to be able to calculate slopes, aspects and
cross-sections, and to predict unknown elevations for objects that occur at places for which we do
not have elevation data. Interpolation provides much of what is needed to perform these
operations.

Methods of interpolation:
• Linear interpolation
• Non-linear interpolation

Figure 1: Linear interpolation

Linear interpolation is a method of assigning values between points of known elevation spread
over an area. Consider a single line transect of data points whose known elevations range between
100 feet and 150 feet. If we assume that the surface changes in a linear fashion, just as in a simple
numerical series, it is obvious that four values, spaced equal distances apart, can be interpolated
between 100 feet and 150 feet. By segmenting the distance between the two known points into
five equal units, we can treat the distances as surrogates for change in elevation: at each
intermediate segment we need only add a 10-foot increment to obtain the missing values. By
drawing smooth lines to connect points of equal value, we can create contours of 100, 110, 120,
130, 140 and 150 feet; in other words, we are able to create an isarithmic map (see the sketch
below).
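A minimal sketch of the worked example above, using NumPy's one-dimensional linear
interpolation; positions along the transect are illustrative.

import numpy as np

known_positions = [0.0, 5.0]           # the two control points on the transect
known_elevations = [100.0, 150.0]      # elevations at those points

positions = np.arange(0.0, 5.5, 1.0)   # the five equal segment boundaries
elevations = np.interp(positions, known_positions, known_elevations)
print(elevations)                      # [100. 110. 120. 130. 140. 150.]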

Figure 2: Non-linear interpolation; distance-weighted interpolation

In distance-weighted interpolation we measure the distance from each kernel, or starting point, to
the surrounding sample points. The elevation values at the sample points are then weighted by the
inverse square of their distances, so that closer values lend more weight to the calculation of the
new elevation than distant ones. There are many modifications of this approach: some reduce the
number of distance calculations by employing a "learned search" approach, while others modify
the weighting by using factors other than the square of distance. The barrier method is especially
useful in the development of surface models that must account for local objects: the interpolation
cannot pass through a barrier in its search for neighboring weights and distances.

Sometimes we are interested in general trends in the Z surface rather than in the exact modeling
of individual undulations and minor surface changes: for example, the general trend in population
across a country, to support demographic research, or whether a buried seam of coal trends
toward the surface, to indicate how much overlying material needs to be removed for surface
mining operations. The most common approach to this type of surface characterization is called
trend surface analysis. In trend surface analysis we use sets of points identified within a specified
region; the region is based on any of the methods already discussed for weighted interpolation. A
surface of best fit is then applied on the basis of mathematical equations such as polynomials and
splines.

Thus far we have worked with linear progressions, assuming that the surface changes in a linear
fashion. At times, however, a series of surface values does not conform to such a linear relation:
in some cases the series is more logarithmic, in others it is predictable only for small portions of
the surface.

Non-linear interpolation techniques are designed to eliminate the assumption of linearity called
for in linear methods. There are three basic types of non-linear interpolation method:
• Weighting methods
• Trend surfaces
• Kriging

Weighting methods assume that the closer together sample values are, the more likely they are to
be affected by one another. For example, as we go up a hill, we note that there is much greater
similarity in the general trend of elevation values close to us than there would be if we compared
our local elevation to a point far away. Likewise, as we go downhill, there will be similar changes
in elevation values for neighboring points; nearing the bottom of the hill, however, we notice that
the elevation values change rather quickly at the base of the hill, whereas the plain beyond the hill
once again takes on a certain similarity in elevational changes. To depict the topography more
accurately, we need to select points within a neighborhood that demonstrate this surface
similarity. This is done by a number of search techniques, including defining the neighborhood
by a predefined distance or radius from each point, predetermining the number of sample data
points, or selecting a certain number of points in quadrants or even octants (see the sketch below).
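A minimal sketch of inverse-distance-weighted interpolation as described above: neighboring
sample values are weighted by the inverse square of their distances. The (x, y, elevation) samples
are illustrative.

import numpy as np

samples = np.array([[ 0.0,  0.0, 100.0],
                    [10.0,  0.0, 120.0],
                    [ 0.0, 10.0, 110.0],
                    [10.0, 10.0, 150.0]])

def idw(x, y, samples, power=2.0):
    d = np.hypot(samples[:, 0] - x, samples[:, 1] - y)
    if np.any(d == 0):                 # exactly on a sample point
        return samples[d == 0, 2][0]
    w = 1.0 / d**power                 # closer points receive larger weights
    return np.sum(w * samples[:, 2]) / np.sum(w)

print(idw(5.0, 5.0, samples))          # estimate at the centre of the square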

Figure 4: Non-linear interpolation; elements of kriging

Kriging begins from the idea of drift: if we are hiking up a mountain, the topography changes in
an upward direction between the starting point and the summit; this overall tendency is the drift.
But along the way we find local drops denting the surface, accompanied by random but spatially
correlated variations, and boulders that must be stepped over, which can be thought of as
elevation noise because they are not directly related to the underlying surface structure causing
the elevational change in the first place. Spatial dependence is measured with the use of a
statistical graphing technique called the semivariogram, which plots the distance between
samples, called the lag, on the horizontal axis; the vertical axis gives the semivariance, defined as
half the variance between each elevational value and each of its neighbors. As the distance
between points increases there is a rapid increase in the semivariance, meaning that the spatial
dependency of values drops rapidly. Eventually a critical value of the lag, known as the range, is
reached, at which point the semivariance levels off and stays essentially flat. Kriging is an exact
method of interpolation: at the sample points themselves the interpolated surface honors the
observed values exactly.

Trend surfaces can be relatively flat, showing an overall trend for the entire coverage, or they can
be relatively complex. The type of equation used determines the amount of undulation in the
surface: the simpler the trend surface looks, the lower the degree it is said to have. For example, a
first-degree trend surface is a single plane that slopes across the coverage, while a surface
containing one bend is said to be a second-degree trend surface.

Figure 3: Non-linear interpolation; trend surface

Kriging, the final method of interpolation, optimizes the interpolation procedure on the basis of
the statistical nature of the surface. Kriging uses the idea of the regionalized variable, which
varies from place to place with some apparent continuity but cannot be modeled with a single
smooth mathematical equation. Kriging treats each such surface as if it were composed of three
separate components. The first, called the drift or structure of the surface, treats the surface as a
general trend in a particular direction. Second, there are small variations from this general trend,
such as small peaks and depressions in the overall surface, that are random but still related to one
another spatially. Finally, there is random noise that is associated with neither the overall trend
nor the spatially correlated variation.

Use of interpolation
Interpolation is a useful technique for creating isolines that describe the surface with which you
are working. It can also be used to display the surface as a fishnet map or a shaded relief map.
A trend surface interpolation, for example, will provide information about the thickness of an ore
body as it slopes across the subterranean surface. In addition, one may want to know about the
quality of the ore seam; here a kriging technique would prove useful because it is the nature of
ore bodies to exist as regionalized variables.

Problems in interpolation
There are a number of interpolation methods; while performing any of them, however, four
factors need to be considered:
1. The number of control points
2. The location of control points
3. The problem of saddle points
4. The area containing data points.


It is safe to say that the more sample points we have, the more accurate the interpolation will be.
The number of control or target points is frequently a function of the nature of the surface: the
more complex the surface, the more data points we need. For important features of particular
interest, such as depressions and stream valleys, we should also place more data points to capture
the necessary detail, since the location of sample points relative to one another has an impact on
the accuracy of interpolation. The problem of sample placement is even more severe when we
consider interpolation from data collected by area to produce an isoplethic map. When the data
points are relatively evenly distributed, it is easiest to use the centroid-of-cell method or the
center-of-gravity method to place the sample points.

The saddle-point problem, sometimes called the alternative choice problem, arises when both
members of one pair of diagonally opposite Z-values forming the corners of a rectangle are
located below, and both members of the second pair lie above, the value sought by the
interpolation algorithm. A simple way to handle this problem is to average the interpolation
values produced from the diagonally placed control points and then place this average value at
the center of the diagonal.

The final problem that must be considered in interpolation is a common one in GIS operations,
involving the area within which the data points are collected. More specifically, for the
interpolation to work properly, the points whose values are to be estimated must have control
points on all sides. As we approach the map margin, the interpolation routine is faced with
control points on only two or three sides of our unknown elevation points, because the map
border precludes any data points beyond the margin. The best interpolation results are obtained
when we are able to search a neighborhood in all directions for selection of control points and
determination of weights. Sometimes this edge problem occurs because surface data were not
part of the original design, sometimes because the study area was selected on the basis of the
confines of a single map, and sometimes because of time limitations.


23 Surface analysis- DEM, TIN, slope, aspect, relief and hill shading

DEM:
A digital elevation model (DEM) represents landform data as point elevation values. The TIN
model is the basic vector data structure for representing surfaces in the computer. However, the
TIN model is only one of a number of methods of storing Z-value information, creating a group
of products collectively called DEMs. Such methods are based either on mathematical models or
on image models designed to approximate more closely how surfaces are normally sampled in
the field or represented on paper. Although mathematical formulations are very useful, the
currently available DEMs are most often image models of some description.
Image models of Z surfaces based on lines are nearly the graphical equivalent of the traditional
method of isarithmic mapping. In such cases models are produced by scanning or digitizing
existing contour lines or other isarithms. The purpose is to extract the form of the surface from
the lines that most commonly depict or describe that form. Once input, the data are stored either
as line entities or as polygons. Because it is not particularly efficient to calculate slopes and
aspects, or to produce shaded relief output, from such line-based models, it is more common to
convert them to point form, treating each point connecting each line segment as a sample
location with an individual elevation value. The result is known as a discrete altitude matrix: a
point image method that represents the surface by a number of points, each containing a single
elevation value.

TIN:
In raster, geographic space is assumed to be discrete in that each grid cell occupies a specific
area. Within that discretized or quantized space, a grid cell can have encoded as an attribute the
elevational value that is most representative of the grid cell. This might be the highest or lowest
value, or even an average elevational value for the grid cell. As such, existing raster data
structures are quite capable of handling surface data. In vector, however, the picture is quite
different. Much of the space between the graphical entities is implied rather than explicitly
defined. To define this space explicitly as a surface, one must quantize the surface in a way that
retains major changes in surface information and implies areas of identical elevation. The
triangulated irregular network (TIN) does this by connecting the sample points into a mesh of
non-overlapping triangles.

Slope:
A common way of expressing slope is rise over reach, where rise is the change in elevation and
reach is the horizontal distance. The general method of calculating slope is to compute a surface
of best fit through neighboring points and measure the change in elevation per unit distance.
Specifically, the GIS will calculate the rise/reach value throughout the entire coverage, creating a
set of categories of slope amount, much as we would do when defining class limits. If we wish
fewer slope categories than are actually produced, we can reclassify the set generated by the GIS.
Although techniques designed to characterize different neighborhoods by the amount of slope on
a topographic surface are in common use, the surface need not be a topographic one. Our idea of
a surface can be generalized to apply to any type of surface data measurable at the ordinal,
interval or ratio levels, called a statistical surface, which is a surface representation of spatially
distributed statistical data.

Both simple and complex methods of reclassifying neighborhoods based solely on slope can also
be performed in raster GIS. The simplest method is to use a search of the eight immediate
neighbor cells of each target cell. This is most often done by looking at all grid cells in the
database and examining their neighbor cells, so that slope values can be produced for the entire
coverage. The software fits a plane through the eight immediate neighbor cells, finding either the
greatest slope value for the neighborhood of grid cells or an average slope.

For each group of cells, the software uses the grid cell resolution as the measure of distance, and
then compares the attribute values of the central cell to the surrounding cells.
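A minimal raster slope sketch, using finite differences (numpy's gradient) rather than the exact eight-neighbor plane fit described above, but embodying the same rise/reach idea; the DEM values and cell size are illustrative:

import numpy as np

def slope_percent(dem, cellsize):
    # Rise per unit reach along rows and columns, combined into a gradient magnitude.
    g0, g1 = np.gradient(dem.astype(float), cellsize)
    return 100.0 * np.hypot(g0, g1)     # slope as a percentage

dem = np.array([[10, 10, 11],
                [11, 12, 13],
                [13, 14, 15]])
print(slope_percent(dem, cellsize=20.0))

Reclassifying the resulting grid into slope categories is then an ordinary raster reclassification.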

Aspect:
Because surfaces exhibit slopes, these features are, by definition, oriented in a particular
direction, called the aspect. The two concepts of slope and aspect are inseparable from a physical
as well as an analytical perspective: without a slope, there is no aspect. There are numerous
applications of this technique. For example, biogeographers and ecologists are aware that there
is generally a noticeable difference between the vegetation on slopes that face north and slopes
that face south. The primary reason for this difference is the availability of sunlight to green
plants, but our interest in the phenomenon is that GIS will allow us to separate out north- versus
south-facing slopes for comparison to related coverages such as soil and vegetation.
Geologists frequently want to know the prevailing slopes of fault blocks or exposed folds as a
path to understanding the underlying subsurface processes. Or a grower may want to place an
orchard on the sunny side of a hill to take advantage of the maximum amount of sunshine. All
these determinations and many more can be performed through the use of neighborhood
functions that classify sloping surfaces based on their aspect.
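Aspect can be derived from the same finite differences. A sketch assuming rows of the grid run north to south; the compass convention here (degrees clockwise from north, facing downslope) is one common choice among several:

import numpy as np

def aspect_degrees(dem, cellsize):
    g0, g1 = np.gradient(dem.astype(float), cellsize)
    east, north = g1, -g0               # components of steepest ascent (rows run N -> S)
    ascent = np.degrees(np.arctan2(east, north))
    return (ascent + 180.0) % 360.0     # downslope compass bearing, 0 = north

dem = np.array([[10, 10, 10],
                [11, 11, 11],
                [12, 12, 12]])          # surface rises toward the south
print(aspect_degrees(dem, cellsize=20.0))  # ~0 everywhere: these cells face north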

Relief:
The simplest method of visualizing surface form is to produce a cross-sectional profile of the
surface. This is common practice in many courses in map reading, geography and geology, where
students are asked to render the profile of a topographic surface along a line drawn between two
points. This is done by transferring each elevational value to a sheet of graph paper where the
horizontal axis is exactly the same width as the line between the points and the vertical axis is
scaled to some vertical exaggeration of the original surface elevation values.
Both surface form techniques, whether raster or vector, are designed to produce neighborhoods
based on changes in surface value that can be interpreted by the user to represent specific
features. Thus ridges, channels, peaks, watersheds and so on may need to be identified as specific
topographic features for later analysis.

Hill Shading:
The process called visibility and intervisibility analysis recognizes that if you are located at a
particular point on a topographic surface, there are portions of the terrain you can see and others
you cannot. The generalized term for the process is viewshed analysis, whereby one defines the
regions that are visible from a particular point in the terrain. In vector, the simplest method is to
connect a viewing location to each possible target in the coverage.
Viewshed analysis is frequently confined to determining areas that are visible to a single viewer.
This is the visibility portion of viewshed analysis. However, there may be situations where you
wish not only to know how much one can see from a particular vantage point, but also to
determine how much of the terrain is visible from another's perspective, or intervisible. In
military applications, for example, you want to know whether your location is visible from
possible enemy positions. To do this involves the same method of ray tracing as before, but it
will often have to be performed once for each viewer location.

Raster methods of intervisibility operate in much the same way, but they are less elegant and
more computationally expensive. The process begins by defining a viewer cell as a separate
coverage against which the elevation coverage will be tested. Starting at the location of the
viewer cell, the software evaluates the elevation that corresponds to that location. Then it moves
out in all directions, one grid cell at a time, comparing the elevation values of each new grid cell
it encounters with the elevation value of the viewer grid cell.
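A simplified sketch of that cell-by-cell comparison, as a straight line-of-sight test between a viewer cell and a target cell on a DEM grid; the nearest-neighbour sampling, eye height and names are illustrative assumptions:

import numpy as np

def line_of_sight(dem, viewer, target, eye_height=1.7):
    # Sample the straight line between viewer and target cell by cell and
    # check that the terrain never rises above the sightline.
    (r0, c0), (r1, c1) = viewer, target
    n = max(abs(r1 - r0), abs(c1 - c0)) + 1
    rows = np.linspace(r0, r1, n).round().astype(int)
    cols = np.linspace(c0, c1, n).round().astype(int)
    z0 = dem[r0, c0] + eye_height
    sight = z0 + np.linspace(0.0, 1.0, n) * (dem[r1, c1] - z0)
    return bool(np.all(dem[rows, cols][1:-1] <= sight[1:-1]))

dem = np.array([[5, 5, 5, 5],
                [5, 9, 5, 5],
                [5, 5, 5, 5],
                [5, 5, 5, 5]], dtype=float)
print(line_of_sight(dem, (0, 0), (3, 3)))   # False: the value-9 cell blocks the view
print(line_of_sight(dem, (0, 0), (0, 3)))   # True: flat row, nothing in the way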
Most applications of intervisibility are based solely on topographic surfaces, but in some cases
the topographic surface will have forest cover with known individual or grouped heights
associated with the trees. To perform intervisibility where the heights of these or other
obstructing objects are known, the elevation coverage values must include the obstruction
heights. These can be added in both vector and raster, usually by means of a mathematically
based (addition) combination of the two coverages.


24 Spatial Data Model: Space, Layers, Coverages and Time


Introduction :-
In urban planning and design one often needs the assistance of maps to determine and
evaluate a variety of features on the earth's surface. These features include, but are not limited to,
settlement patterns, locations of roads, streams, rivers, large and small water bodies, the
topography of the land, extent and density of forests, and location and size of agricultural plots.
Since these are constantly changing, with increasing population, movement and shifts in the
earth's surface, and changes in agricultural techniques and planting patterns, the maps too must
change to reflect these shifts. The current method for recording features and phenomena
(Spencer 2003, p 90) is through Geographic Information Systems, or GIS, which use a database
management system. The database uses the latest mapping technology to create accurate,
updated resources. In order to record targets on the earth's surface, however, one must determine
what is documented and how.

Proper preparation in advance of the documentation process is important. Because
updates of maps occur through field work, researchers must plan and prepare for this work by
assessing the required needs of the output, "assembling all existing maps, digital data, lists of
landscape features and phenomena that are required to answer the research questions" (Spencer,
et al 2003). This is creating an area of interest. The area of interest limits the scope of the project,
or creates a spatial boundary for the research.

The data a GIS uses are divided into two groups: spatial data and attribute or descriptive
data. GIS is the link between these two types of data. Spatial data describe the way an object
exists as a natural, physical entity; they deal with location, shape and relationship with other
objects. Attribute data are descriptive data which deal with the features that are represented in the
spatial data and are qualitative information assigned to the object. For example, an attribute could
be a descriptive name and other non-visual information about the area of interest recorded as
spatial data. Both of these combine to create our understanding of a cartographic image and thus
assist in the formation of a variety of maps. In the database system, these become key identifiers
and allow the researcher to access specific features and link certain features to certain objects.

Processing and storing of information is another part of the mapping process. There are
two ways to process the spatial data. One is the raster method that records points, lines and areas
through a matrix. The second is the vector method which uses Cartesian coordinates to save
points that become lines, polygons, and 3-dimensional objects and volumes. Each type of model
is stored as ‘themed’ data sets. These data sets contain groups of layers that are ‘themed’ together
with specific mapping information. The goal of data sets is to allow the end-user to access
information quickly and easily.

Types of Spatial Data Models - Raster and Vector Models


The first type of spatial model is a Raster model. Here, a point, line or area of the
'phenomena and features' is represented and understood through a matrix, a series of boxes. Each
feature is represented with a particular number of boxes, each having a given value. In practice
the area under study appears to be covered by a fine mesh, or matrix, or grid of cells. The ground
surface attribute value occurring at the center point of each cell is recorded as the value for that
cell. Each cell in turn can be individually selected to provide information about the geographic
and visual data elements (Panda 2005).

This spatial data model is not continuous; space is divided into smaller units located in
space. The image resolution is determined by the size of each cell and its coordinates, while the
overall grid size (i.e. the number of rows and columns grouped together) determines the quality
of the raster model. Because the computer system stores the entity as a mesh and not as multiple
points, Raster computer storage files are often smaller than vector files.

The second type of spatial model is called a Vector model. In this type the 'phenomena
and features' in the area of interest are represented through points, lines, polygons and 3-
dimensional objects. The points are recorded as Cartesian co-ordinates, (x,y) or (x,y,z). A line is
two points joined together; a polygon is a string of co-ordinates with the same starting and ending
point; 3-dimensional (3-D) objects are typically polygons joined at a variety of points with lines.
The 3-D entities include settlements, mountains, deep ravines, deep soil types, etc. In today's
world, when a researcher is in the field creating or updating maps, the points he or she is locating
are found using GPS (Global Positioning System) coordinates. GPS uses satellite communication
to identify one's exact location in the world.

Of the two types of spatial models, the vector model is more precise with the details of
the entities stored within it. It is advantageous in terms of resolution and ability to store the
changes that happen over time; however, it requires a more complex and ‘robust’ computer
system in which to store the information for calculating the displays (Embley and Nagey 1991).

Layers and the Layering Process


As stated previously, GIS is a database. As such, each item or feature has unique values
assigned to it. The way these assignments occur is twofold. First, an area of interest is defined by
a researcher. Then the landscape is divided into thematic layers (Panda 2005). Each layer has
objects that are stored as x,y coordinates and attributes. The coordinates and attributes are stored
in two different locations but linked through the database so they may be accessed (Panda, 2005).
Each layer represents a separate classification of similar features. For example in a
natural forest area, the primary layers would be geology, soils, vegetation, topography, hydrology
and so on. In an urban environment, the primary layers are buildings, streets, land use, zoning and
other political and administrative units. The organization of these is an important process in the
preparation of a spatial data model. It builds one’s understanding of the area of interest and
stratification of information leads to a better analysis of the real world.

The point of creating the layering system is that different users have different needs and
different points of control. A planning department may need to know the ownership of a plot of
land, or the location of a park within a city and its level of plantation. But the streets department
may need the center line of a road or the size of a sewerage line below the road. A database can
be created that meets the needs of both: an organized layering system coupled with thematic data
or layer sets can serve both without overlapping of information, unless requested.

Coverages within Vectors and Raster Models


Coverages are abstractions of similar kinds of features from objects found in the real
world that lie in a particular area together. The abstractions are the two-dimensional features
previously mentioned: point, line, polygon. For example, light poles make up a variety of points
so a collective of all the light poles is a point coverage. A network of roads or rivers is a
particular kind of feature which would be a line coverage. A homogeneous surface like a dense
forest or a grouping of agricultural plots is a polygon coverage.

The storage of the coverages is the same as previously described. There are two types of
data that are part of the coverage: spatial data and attribute data. The discrete definitions help
the computer file and retrieve information for the user.


Coverages are determined, defined and collected by a researcher who must understand the
overall physicality of the landscape and the type of computer program being used to store the
data. This person looks to previous database information, maps and historic cartographic
information to evaluate the requirements for documentation.

Space – Time Analysis:


In recording cartographic information, time is an important aspect. It is considered the
fourth dimension of a recorded 3-dimensional object (Landgran 1992). The examination of
spatial data models over a sequence of data layers through time can be used to understand
changes in a particular geographical area. This analysis is used to understand changing
perceptions of trends in particular characteristics of a particular space. For example, examination
of deforestation between two time periods would constitute an analysis of change; another is the
visualization of the growth of a population center during a period of industrialization; while the
search for evidence of global warming over the past half-century constitutes a time-series
analysis. Such analysis is based on quantitative and qualitative change: one is the difference
through time in kind, the other the difference through time in degree.

Time as a cartographic entity must be delineated. As mentioned in previous sections,
researchers must decide what is to be observed and at what discrete moment they need to record
it. For example, earthquakes and floods often change the location of geographic entities. Cities
may change. Shifts in the earth's plates may change the ridge line of a mountain or a feature in
the ocean. A specific time for documentation can be selected before and after the event to create
a time-based comparison. In this way, we as planners can view and analyze the GIS database to
make predictions about human activities and the natural phenomena that may affect us.

Conclusion
The spatial data modeling system is a powerful database tool that is useful in a variety of
applications. Creating a logical layering system assists the user in obtaining specific data that
may be required. Cartographers, urban planners, city engineers, architects and forest officials, to
name only a few, are among the professionals who can benefit from accurate recording of areas
on the globe.

The GIS database becomes more accurate as technological tools and equipment are used to
understand the attributes of phenomena and features on the earth's surface. The use of previous
documentation allows us to understand our past and potentially predict future events and trends.

25. Spatial data- Representation of Geographic Features in
Vector and Raster Models
What is geographic data?

Geographic data are a special form of spatial data characterized by two crucial properties:

• They are registered to a geographical co-ordinate system, and so are said to occupy geographic space.

• They are normally recorded at relatively small scales, referred to as geographic scale. (As defined in "Concepts & Techniques of GIS" by Albert K. W. Yeung.)

Therefore geographic data, in short, are the component of GIS that records the locations and
characteristics of natural features or human activities that occur on or near the earth's surface.

The geographic data are categorized into three distinct types mentioned below:
• The geodetic control network

This provides a geographical framework whereby different sets of geographic data can be
cross-referenced with one another. It is also the foundation of all geographic data.

• The topographic base

This is normally created as the result of a basic mapping programme. It can be obtained
usually by using photogrammetry.

• The graphical overlays

These are thematic data pertaining to specific GIS applications. It can be derived directly
from the topographic base.

The geographic data within the digital database are represented by three forms

a) Vector –depicted by points, lines and polygons,

b) Raster- depicted by attribute values or grid of cells with spectral values

c) Surface – depicted by set of selected points or continuous lines.

In summary, types of data can be classified as follows:

• Spatial data – This represents features that have a known location to the earth.

• Attribute data – This is information linked to the geographic features (spatial data) that
describe those features.

• Data layers – These are results of combining spatial data and attribute data. Meaning
addition of data base to the spatial location.
• Layer types – These refers to the way spatial and attribute information are connected.
There are two major layers type i) Vector and ii) Raster.

• Topology – This refers to how geographic features are related to one another and where
they are in relation to one another.

Levels of measurement:

The level of measurement is necessary for the classification of data; measurements may be made for point, line and polygon features.

Attribute data can be classified into four levels of measurement:

• Nominal – The lowest level of measurement, in which data can only be distinguished qualitatively, such as vegetation or soil type.

• Ordinal – Data at this level can be ranked into hierarchies, e.g. stream order or city boundaries.

• Interval – This level of measurement indicates distance between the ranks of measured elements, where the starting point is assigned arbitrarily; e.g. temperature in degrees Celsius, where 0 (zero) is an arbitrary value.

• Ratio – The highest level of measurement; it includes an absolute starting point, e.g. property value and distance.
26 Data products: Data formats, Ground segment
organization, Data product generation, referencing scheme
GIS Data
(1) Entity (spatial data):

• A point / a line / an area
• "Where things are" data
• E.g. the Taj Mahal, a monument in Agra, has a reference in terms of latitude and longitude
• A special database structure is required to store the data
• Spatial entity types have the basic topological properties of location, dimension and shape

(2) Attribute (aspatial data):

• "What things are" data
• Data about the real-world feature, e.g. the Taj Mahal's history, dimensions, plan, etc.
• Can be stored in a conventional database structure
• Does not have a location

DBMS :-
It is a computer program to control the storage, retrieval and modification of data (in a
database).

Functions of DBMS:
• File Handling & management (creating, modifying or deleting database structure)
• Adding, Updating and deleting RECORDS
• Extraction of information from data.
• Maintenance of data security and integrity .
• Application building.

GIS Management field :-


Two distinct views of data are important.
1. Logical data: the way in which data appear to a user
2. Physical data: details of data organization as it actually appears in memory or
on a storage medium

Functions of DBMS :-

1. Security
2. Integrity
3. Synchronization
4. Physical data independence
5. Minimisation of redundancy
6. Efficiency
Components of DBMS :-
Interaction with a database system involves the following broad types of tasks, performed by users:
• Data definition
• Storage definition
• Database administration
• Data manipulation

GIS Data File Management :-

(a) Simple lists (b) Ordered sequential files (c) Indexed files

Building GIS Worlds :

• LCGU – Least Common Geographical Unit; ITU – Integrated Terrain Units
• Layer based
• Feature based
• Object oriented

Data in GIS:
Digital image data: an original image of 320 rows × 480 columns of pixels; enlargements show
(1) 20 rows × 30 columns of pixels and (2) 10 rows × 15 columns of pixels. The digital numbers
corresponding to the radiance of each pixel are shown in a table.

Reference Data:-
Reference data (ground truth) are used to serve the following purposes:
1. To aid in the analysis and interpretation of remotely sensed data
2. To calibrate a sensor
3. To verify information extracted from remote sensing data

There are two primary approaches to representing the locational component of geographic
information: (a) raster (grid cell) format and (b) vector (polygon) format.

Raster Data Formats:

Advantages:
• Simpler data structure
• Greater computational efficiency in operations such as overlay analysis
• Represents features having high spatial variability and/or "blurred boundaries" (e.g. between pure and mixed vegetation zones) more effectively
Disadvantages:
• Because of the limited size of the cells comprising the raster, topological relationships among spatial features are more difficult to represent
• Less efficient storage
Vector Data Formats:
Advantages:
• Relatively lower data volumes
• Better spatial resolution
• Preservation of topological relationships in the data
• Makes network analysis more efficient
Disadvantages:
• Computationally complex data structure
• Lower computational efficiency
• Does not represent zonal features effectively
• Overlay analyses are more complex

Digital remote sensing images are collected in raster format. Accordingly, digital images
are inherently compatible spatially with other sources of information in a raster domain;
'raw' images can easily be included directly as layers in a raster-based GIS.

Overlay of images between the raster and vector data formats can be done: raster images can be
displayed as a backdrop for a vector overlay on the image.

GIS supports conversion between raster and vector formats as well as the simultaneous
integration of raster and vector data.
27. Spatial data-Concept of Arcs, Nodes, Vertices &
Topology
Data Models available for Spatial Data Modeling in GIS:
• Computer-Aided Design (CAD)
• Graphical
• Image
• Raster
• Vector
• Network
• Triangulated Irregular Network (TIN)
• Object-Oriented

1.1. Data Model Structure in Brief


A schematic of data models (after Chang, 2002): GIS data comprise spatial data and attribute
data. Spatial data are stored in either vector or raster structures. Vector structures divide into
non-topological ones (the shapefile and other simple data models) and topological ones (the
coverage, TIN, regions and dynamic segmentation), with the GeoDataBase as a high-level,
object-oriented data model. Raster structures include the grid and formats such as DBase and
IDRISI. Attribute data are accessed through DBase and other databases.

2. Data Models
2.1. Raster
Represent earth’s surface and objects on it with uniformly shaped cells or pixels of
the same size
Divides space into two-dimensional array (length and breadth)
Space filling approach, each cell has value
Typically square, but not necessarily
Use a common ground dimension for cells
Must have a projection (Otherwise gaps and overlaps)
Topology implicit – by virtue of cell layout and variability between cells
Location within layer defined by row and column starting in upper left hand corner
with 0,0 NOT 1,1
Georeferenced in real world coordinates (usually in a header file)

Header file also has number of rows, columns, cell size, and more metadata
Ground distance and area calculated from cell size
Attributes are represented by the value within a cell, one value per cell
Several attributes can be tied to a cell with a value attribute table
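A sketch of the georeferencing arithmetic implied by such a header, with illustrative field names (the real header layout varies by format):

from dataclasses import dataclass

@dataclass
class RasterHeader:
    nrows: int
    ncols: int
    cellsize: float      # ground size of one cell, in map units
    x_origin: float      # x of the upper-left corner of the grid
    y_origin: float      # y of the upper-left corner of the grid

def cell_center(hdr, row, col):
    # Row 0, column 0 is the upper-left cell; y decreases as rows increase.
    x = hdr.x_origin + (col + 0.5) * hdr.cellsize
    y = hdr.y_origin - (row + 0.5) * hdr.cellsize
    return x, y

hdr = RasterHeader(nrows=16, ncols=16, cellsize=20.0, x_origin=500000.0, y_origin=4200000.0)
print(cell_center(hdr, 0, 0))    # center of the upper-left cell
print(hdr.ncols * hdr.cellsize)  # ground width of the whole grid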

Representation of typical raster data

Representation of spatial objects


Great for continuous variable or “field” and 3 dimensional
But can do 0,1,2 dimensional discrete objects
Point – represented by single cell
Lines – group of cells – no smooth edges “jaggies”
Areas – clustered group of cells – “jaggies”

2.1.1. Advantages of Raster Model


Simplicity
Simple concept and implementation
Easy to perform analysis
Relatively inexpensive
Spatial index is implicit with every cell
Similarity to concept of fields
Good for modeling surfaces
Easy to incorporate remotely sensed data
2.1.2. Disadvantages
Data storage demands
Cell size, Array size, Compression
Cell based product
Less visual appeal
Loss of spatial detail
Spatial analysis issues
Relationship with cells beyond neighborhood difficult
Issues with some location operations – resolution

3. Vector Data Model


• Represent earth surface and objects on it within “edges” between changes
– Points (nodes), lines (arcs) and polygons (areas)
– Great for discrete objects 0-2 dimensions, but can do continuous or “field”
(and more dimensions) with appropriate attributes and representation
– Divides space into subdivisions
• But what about real life? Aha, generalization
– Not space filling
– Requires spatial index – id and spatial location of features
– Builds on points (nodes and vertices) with id and coordinates
• Arcs are series of points, arcs have id
• Polygons are series of points and arcs that are enclosed, polygons
have id

3.1. Representation of Spatial Objects


One may need to decide on dimensionality before picking model and feature; viz. Point,
Line or Area.

3.2. Representation of Spatial Objects


3.2.1. Point

Co-ordinate representation:- 1: (13,32)


From here, lots of attributes can be added.

3.2.2. Lines

Co-ordinate representation:- 1: (6,32) (22,28) (32,16)


Attributes can be added here. A line runs node, vertex, node (depending upon the purpose, it
can be split at a node into two arcs, giving two line segments)
3.2.3. Areas

Co-ordinate representation:- 1: (6,29) (13,33) (30,26) (31,17) (24,9) (6,17) (6,29)


From here attributes can be added but in some structures may also require a label point for
attribution.
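Using the coordinate representations shown above, a minimal sketch of point, line and area features with attributes linked through feature ids (the dictionary structure is illustrative, not a specific GIS format):

import math

# Geometry keyed by feature id, using the coordinates shown above.
point = {1: (13, 32)}
line = {1: [(6, 32), (22, 28), (32, 16)]}
polygon = {1: [(6, 29), (13, 33), (30, 26), (31, 17), (24, 9), (6, 17), (6, 29)]}  # closed ring

# Attribute table linked to the geometry through the same id.
attributes = {1: {"name": "parcel A", "landuse": "residential"}}

# Length of line 1: sum of the straight segments between consecutive vertices.
length = sum(math.dist(a, b) for a, b in zip(line[1], line[1][1:]))
print(round(length, 2), attributes[1]["name"])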

3.3. Assigning Values


• Your call for feature type based on what you’re trying to do
• But you should still have decision rules to be consistent
• Think of Generalization or Cartographic Abstraction before assigning values
• How far do you want to go?
• More nodes, more features more memory, more time, and more money.

3.4. Types of Vector Data Structures


• Non-topological structures
– Simple structures
– Shapefiles
– Some graphical programs too
• Topological structures
– General concept
– Specific examples

3.4.1. Types of Vector Data Structures – Non Topological Structures


3.4.1.1. Simple Structures – Spaghetti
Characteristics
Features are unrelated points, lines or polygons
Features can overlap
Benefits
Simplicity- Good for graphics/presentation
Disadvantages
Very limited spatial analysis
Inefficient for fields – need to create two arcs for contiguous polygons
Permits overlapping – could violate rules of your data layers
“Slivers” – a real problem – where polygons overlap
3.4.1.2. Shapefiles
Non-topological structure
Store features as geometric shapes
Data stored in as many as 11 files
But requires three:
.shp stores the shape as a list of vertices (in binary code)
.shx stores the index of the shape for locating values
.dbf stores the attributes
Lots of shapes to choose from with different dimensional options and
combinations
Advantages
Fast processing, easy to create
Disadvantages – the problems of non-topological structure – overlap, slivers, too
simple (no network analysis capability)
Also depending upon size of project, may be more efficient to do
topological coverage – my anecdotal evidence

3.4.2. Types of Vector Data Structures – Topological Structures


3.4.2.1. Topological Structures
• Characteristics
– Simple features with topologic rules
• Topology
– Math and science of “what is next to what”
– Used to validate geometry – great for error correction – nodes must snap
together, polygons must be enclosed and not “leaky”
– Provides information about connectivity and adjacency (contiguity)
– Provides analysis functionality – network and adjacency questions
• Attribute information
– Typically stored in associated database tables
3.4.2.2. Topological Data Structures
• Arc-node – This is ArcINFO’s structure
– “Coverage” made up of pointers and tables
• Separate INFO folder and Coverage folder both within a
workspace (a special directory)
• Others
– GBF/DIME
• Geographic Base File/Dual Independent Map Encoding
– DLG
• Digital Line Graph
– TIGER
• Topologically Integrated Geographic Encoding and Referencing
System
3.4.2.3. ArcINFO's Arc-Node Topology:
Nodes connect the ends of arcs; vertices define the curve of the arcs.

(Figure: six arcs joining nodes N1–N5, with vertices V1–V6, bounding polygons A–D on a 0–7 coordinate grid; V = vertex, N = node, A–D = polygon labels. Figure omitted.)
Representation of Topological Structure

Arc #  From Node  To Node  Left Poly  Right Poly
1      N1         N2       B          A
2      N2         N3       B          A
3      N3         N4       B          D
4      N1         N4       D          B
5      N5         N5       B          C
6      N1         N3       A          D

List of Nodes and Vertices

Arc #  Points along the arc
1      N1@4,6  V1@2,6  N2@1,3
2      N2@1,3  V2@3,2  N3@5,3
3      N3@5,3  N4@5,5
4      N1@4,6  N4@5,5
5      N5@2,4  N5@2,4
6      N1@4,6  V6@6,7  V5@7,6  V4@6,5  V3@6,4  N3@5,3
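The left/right polygon columns are what make adjacency queries cheap. A minimal sketch using the table above (the tuple layout and function name are illustrative):

# (arc #, from node, to node, left polygon, right polygon) -- the table above
arc_table = [
    (1, "N1", "N2", "B", "A"),
    (2, "N2", "N3", "B", "A"),
    (3, "N3", "N4", "B", "D"),
    (4, "N1", "N4", "D", "B"),
    (5, "N5", "N5", "B", "C"),
    (6, "N1", "N3", "A", "D"),
]

def adjacent_polygons(poly):
    # Any polygon that shares an arc with `poly` sits on the other side of it.
    out = set()
    for _, _, _, left, right in arc_table:
        if left == poly:
            out.add(right)
        if right == poly:
            out.add(left)
    out.discard(poly)
    return sorted(out)

print(adjacent_polygons("B"))   # ['A', 'C', 'D']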

3.5. Advantages of Vector Model:


Low storage demand compared to raster
Similarity to concept of objects
Good for simple object based themes
Appealing cartographic products - intuitive
Correspondence with object dimensionality as often shown on maps
Spatial analysis issues
Allows for relationship beyond neighbor as in raster

3.6. Disadvantages of Vector Model:


Complexity
Implementation
Requirement for explicit spatial indexing
Spatial analysis, less intuitive than overlays in raster
Limitations
Surface modeling – except TINS, not very good
Difficult to use raster data…doesn’t match well
Less suitable for fields, continuous
Expense
Technology required for output and display
Complexity can lead to high storage demands

4. Topology

The topologic data structure is often referred to as an intelligent data structure because
spatial relationships between geographic features are easily derived when using them.
Primarily for this reason the topologic model is the dominant vector data structure
currently used in GIS technology. Many of the complex data analysis functions cannot
effectively be undertaken without a topologic vector data structure.

The secondary vector data structure that is common among GIS software is the computer-
aided drafting (CAD) data structure. This structure defines geographic features, e.g. points,
lines, or areas, by listing elements, not features, defined by strings of vertices. There is
considerable redundancy with this data model, since the boundary segment between two
polygons can be stored twice, once for each feature. The CAD structure emerged from the
development of computer graphics systems without specific consideration of processing
geographic features. Accordingly, since features, e.g. polygons, are self-contained and
independent, questions about the adjacency of features can be difficult to answer. The CAD
vector model lacks the definition of spatial relationships between features that is provided by
the topologic data model.

3.7. Topology (as per ESRI)


A GIS topology is a set of rules and behaviors that model how points, lines, and polygons
share geometry. For example, adjacent features, such as two counties, will share a common
edge.

Shared Boundary between Dallas and Rockwall Counties in Texas


This illustration shows how a layer of polygons can be described in two ways: (1)
collections of geometric features and (2) a graph of topological elements (nodes, edges,
faces, and their relationships).

This means that there are two potential methods used when working with features—one in
which features are defined by their coordinates and another in which features are
represented as an ordered graph of their topological elements.

3.8. Why Topology?


Topology is employed to
• Manage shared geometry (i.e., constrain how features share geometry). For example,
adjacent polygons, such as parcels, share edges; street centerlines and the boundaries of
census blocks share geometry; adjacent soil polygons share edges.
• Define and enforce data integrity rules (e.g., no gaps should exist between parcel features,
parcels should not overlap, road centerlines should connect at the endpoints).
• Support topological relationship queries and navigation (e.g., have the ability to identify
adjacent and connected features, find the shared edges, and navigate along a series of
connected edges).
• Support sophisticated editing tools that enforce the topological constraints of the data
model (e.g., ability to edit a shared edge and update all the features that share the common
edge).
• Construct features from unstructured geometry (e.g., the ability to construct polygons
from lines sometimes referred to as "spaghetti").

3.8.1. Historical Topological Data Model Example: The ArcInfo Coverage


ArcInfo® coverage users have a long history and appreciation for the role that topology
plays in maintaining the spatial integrity of their data.
Elements of the ArcInfo Coverage Data Model

In a coverage, the feature boundaries and points were stored in a few main files that were
managed and owned by ArcInfo Workstation. The ARC file held the linear or polygon boundary
geometry as topological edges, which were referred to as "arcs." The LAB file held point
locations, which were used as label points for polygons or as point features such as a set of
points representing oil well locations. Other files were used to define and persist the topological
relationships between each of the edges and polygons. For example, one file called the polygon-
arc list (PAL) file listed the order and direction of the arcs in each polygon. In ArcInfo, software
logic was used to assemble the coordinates for each polygon for display, analysis, and query
operations. The ordered list of edges in the PAL file was used to look up and assemble the edge
coordinates held in the ARC file. The polygons were assembled at runtime when needed.
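A sketch of that runtime assembly, reusing the arc coordinates from the arc-node example earlier; the PAL-style entry for polygon B (arcs 1, 2, 3 forward and arc 4 reversed) is inferred from that table for illustration, not taken from ArcInfo itself:

# Arc coordinates keyed by arc # (from the earlier list of nodes and vertices).
arc_coords = {
    1: [(4, 6), (2, 6), (1, 3)],
    2: [(1, 3), (3, 2), (5, 3)],
    3: [(5, 3), (5, 5)],
    4: [(4, 6), (5, 5)],
}

# PAL-style entry: ordered (arc #, runs forward?) pairs bounding polygon B.
pal_B = [(1, True), (2, True), (3, True), (4, False)]

def assemble_polygon(pal_entry, coords):
    ring = []
    for arc_id, forward in pal_entry:
        pts = coords[arc_id] if forward else list(reversed(coords[arc_id]))
        ring.extend(pts if not ring else pts[1:])   # skip the duplicated shared node
    return ring

print(assemble_polygon(pal_B, arc_coords))
# [(4, 6), (2, 6), (1, 3), (3, 2), (5, 3), (5, 5), (4, 6)] -- a closed ring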

3.8.1.1. Advantages:
• It used a simple structure to maintain topology.
• It enabled edges to be digitized and stored only once and shared by many features.
• It could represent polygons of enormous size (with thousands of coordinates)
because polygons were really defined as an ordered set of edges (or arcs).
• The topology storage structure of the coverage was intuitive. Its physical
topological files were readily understood by ArcInfo users.

3.8.1.2. Disadvantages:
• Some operations were slow because many features had to be assembled on the fly
when they needed to be used. This included all polygons and multipart features
such as regions and routes.
• Topological features (such as polygons, regions, and multipart lines called
"routes") were not ready to use until the coverage topology was built. If edges were
edited, the topology had to be rebuilt. (Note: Partial processing was eventually
used, which required rebuilding only the changed portions of the coverage
topology.) In general, when edits are made to features in a topological dataset, a
geometric analysis algorithm must be executed to rebuild the topological
relationships regardless of the storage model.
• Coverages were limited to single-user editing. Because of the need to ensure that
the topological graph was synchronized with the feature geometries, only a single
user at a time could update a topology. Users would tile their coverages and
maintain a tiled database for editing. This enabled individual users to "lock down"
and edit one tile at a time. For general data use and deployment, users would
append copies of their tiles to a mosaicked data layer.
28. Spatial data-computer representation for storing
spatial data.
Spatial data:-
Databases
• A database is like a storehouse capable of storing large amounts of data. It comes
with a number of useful functions:
• It can be used by several users at the same time, i.e., it allows concurrent use
• It offers a number of techniques for storing data and allows use of the most efficient one, i.e.,
it supports storage optimization
• It allows rules to be enforced on the stored data, which are automatically checked after each
update to the data, i.e., it supports data integrity
• It offers an easy-to-use manipulation language, which allows all sorts of data extraction
and data updates to be performed, i.e., it has a query facility
• It will try to execute each query in the data manipulation language in the most efficient
way, i.e., it offers query optimization

Spatial databases
Spatial databases are a specific type of database. They store representations of geographic
phenomena in the real world to be used in a geographic information system. Spatial data are
different in the sense that methods other than tables are used to store the representations, because
it is not easy to store and represent geographic information using tables. A spatial database is not
the same as a GIS, although the two have some common characteristics. The spatial database
concentrates on the functions mentioned above, while a GIS concentrates on operations on the
spatial data, which require a deeper understanding of geographic space. The spatial data to be
stored can consist of points, lines, areas or images. Different storage and compression techniques
exist for each of them.
Computer Representation of spatial Data
A computer must be instructed exactly how spatial patterns should be handled and displayed.
There are two formats:
Vector
Grid cell or raster

Vector
With the vector format a set of lines, defined by start and end points as well as some form of
connectivity, completely represent an object.

Raster
With the raster format a set of points on a grid clearly represent an object and the computer
assigns a common code (symbol or color) to each cell.
Both formats have certain advantages and certain disadvantages. There is no unique
connection between the vector and raster structure of a geographic database; in practice, a GIS
often uses a combination of both formats.
Raster data structure
The raster data structure consists of an array of grid cells or pixels, referenced by row and
column number, each containing a number representing the type or value of the parameter being
mapped. The 2-dimensional surface via which the geographical data are linked is not continuous,
and this can have an important effect on estimates of lengths and areas when grid cell sizes are
large with respect to the features being represented. In the raster format, a range of different
methods is used to encode spatial data for storage and representation. There are four methods by
which compact storage can be achieved:
Chain codes
Run-length codes
Block codes
Quadtrees

Ground truth, in this context, refers to:
• The actual facts of a situation, without errors introduced by sensors or human perception
and judgment. For example, the actual location, orientation, and engine and gun state of
an M1A1 tank in a live simulation at a certain point in time is the ground truth that could
be used to check the same quantities in a corresponding virtual simulation.
• Data collected on the ground to verify mapping from remote sensing data such as air
photos or satellite imagery.
• Verification of the correctness of remote sensing information by use of ancillary
information such as field studies. In cartography and in the analysis of aerial photographs
and satellite imagery, the ground truth is the set of facts found when a location is field
checked, that is, when people actually visit the location on foot.

Chain codes
A chain code, also known as a boundary or border code, is used in cartographic applications
since it works by defining the boundary of the data. The chain code of a region is specified with
reference to a starting point and a sequence of unit vectors, such that the interior of the region
remains to the right of the vectors. The directions can be represented by numbers, and chain
codes with more than four directions can also be used. Chain codes are not only compact, they
can also simplify the detection of features of a region boundary; on the other hand, they do not
lend themselves to computing shape properties such as elongatedness, or to set operations such
as union and intersection.
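A sketch of decoding a four-direction chain code; the numbering (0 = east, 1 = north, 2 = west, 3 = south) is one common convention, chosen here for illustration:

# Unit vectors for the four directions.
MOVES = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def decode_chain(start, code):
    # Walk from the starting cell, one unit vector per code digit.
    x, y = start
    boundary = [(x, y)]
    for d in code:
        dx, dy = MOVES[d]
        x, y = x + dx, y + dy
        boundary.append((x, y))
    return boundary

# A 2x2 square traced clockwise so the interior stays to the right of each vector.
print(decode_chain((0, 0), [1, 1, 0, 0, 3, 3, 2, 2]))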

Run length codes


This method reduces the data on a row-by-row basis. Where there is a group of a number of cells
of a given type, it stores only one value, instead of storing one value for each cell individually.
The example below shows a hypothetical vector soil map having five polygons, each assigned
one of three possible soil types (color themed). A 16 by 16 grid, having cells that are 20 by 20
map units, has been superimposed to represent the polygon boundaries and the areas they
enclose in raster format. These cells are referenced by row and column number and a Z-value. To
further condense these data, rows of cells having the same thematic value are scanned from left
to right and stored as "runs". A "run" is denoted by a beginning and ending cell (column #) and
the common thematic value. These "runs" are displayed in the fourth figure.
(Figures: vector soil map; raster soil map; Z-value (theme) assignment; run-length encoded soil map. Figures omitted.)

Note: as can be seen above, the grid cell or pixel size greatly affects the amount of detail that is
preserved in converting from vector to raster format. Area and perimeter calculations will also be
altered.
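A sketch of run-length encoding a single raster row into (begin column, end column, value) runs, matching the description above; the function names are illustrative:

def rle_encode_row(row):
    # Collapse consecutive identical values into (begin col, end col, value) runs.
    runs = []
    start = 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((start, i - 1, row[start]))
            start = i
    return runs

def rle_decode_row(runs):
    return [value for begin, end, value in runs for _ in range(end - begin + 1)]

row = ["A", "A", "A", "B", "B", "C", "C", "C", "C"]
runs = rle_encode_row(row)
print(runs)                        # [(0, 2, 'A'), (3, 4, 'B'), (5, 8, 'C')]
assert rle_decode_row(runs) == row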

Block codes
This method extends run-length encoding to two dimensions by using a sequence of square
blocks to store data. The data structure consists of the origin (center or bottom left) and side
length of each square. This method is also called the medial axis transformation (MAT).
Quadtree
One of the benefits of the raster model is that each cell can be subdivided into smaller cells of
similar shape and orientation. This feature of the raster model has led to the development of
several innovative data storage and representation methods based on regularly subdividing
space. The quadtree is a commonly used technique based on recursive decomposition of space;
its development has been driven to a large extent by a desire to save storage by aggregating data
having similar or identical values. The saving in storage that arises from this aggregation is of
great importance. The lowest limit of division is the single pixel. This leads to a tree structure of
degree 4, because each node has 4 branches, namely the NW, NE, SW and SE quadrants.
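A sketch of building a quadtree over a square, power-of-two raster: uniform blocks become leaves holding a single value, mixed blocks are split into NW, NE, SW and SE quadrants (the nested-dictionary representation is illustrative):

import numpy as np

def quadtree(grid, r=0, c=0, size=None):
    # A uniform block becomes a leaf holding its value; otherwise recurse.
    if size is None:
        size = grid.shape[0]        # assumes a square grid with power-of-two side
    block = grid[r:r + size, c:c + size]
    if (block == block[0, 0]).all():
        return int(block[0, 0])
    h = size // 2
    return {"NW": quadtree(grid, r, c, h),
            "NE": quadtree(grid, r, c + h, h),
            "SW": quadtree(grid, r + h, c, h),
            "SE": quadtree(grid, r + h, c + h, h)}

grid = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 2],
                 [1, 3, 2, 2],
                 [3, 3, 2, 2]])
print(quadtree(grid))   # only the mixed SW quadrant is subdivided further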
29. Non-spatial data- RDBMS, concepts, components, Database
scheme, Relationships- one-to-one, one-to-many
Definition:

Non spatial information about a geographic feature in a GIS, usually stored in a table and linked
to the feature by a unique identifier. For example, attributes of a river might include its name,
length, and sediment load at a gauging station.

In raster datasets, information associated with each unique value of a raster cell.

Information that specifies how features are displayed and labeled on a map; for example, the
graphic attributes of a river might include line thickness, line length, color, and font for labeling.

In MOLE, aspatial information about a geographic feature in a GIS, usually stored in a table and
linked to the feature by a unique identifier. For example, attributes of a force element might
include its name and speed. Most MOLE attributes are what some military specifications refer to
as labels or modifiers.

RDBMS (Relational Data Base Management System):-

1. It is a type of database management system (DBMS) that stores data in the form of related
tables. Relational databases are powerful because they require few assumptions about
how data is related or how it will be extracted from the database. As a result, the same
database can be viewed in many different ways.
2. An important feature of relational systems is that a single database can be spread across
several tables. This differs from flat-file databases, in which each database is self-
contained in a single table. Almost all full-scale database systems are RDBMS's. Small
database systems, however, use other designs that provide less flexibility in posing
queries.

Concept:-

Two important pieces of RDBMS architecture are the kernel, which is the software, and the data
dictionary, which consists of the system-level data structures used by the kernel to manage the
database.
You might think of an RDBMS as an operating system (or set of subsystems), designed
specifically for controlling data access; its primary functions are:
• Storing, retrieving, and securing data
• Maintaining its own list of authorized users and their associated privileges
• Managing memory caches and paging
• Controlling locking for concurrent resource usage
• Dispatching and scheduling user requests
• Managing space usage within its table-space structures

The sub-systems of an RDBMS:

I/O, Security, Language Processing, Process Control, Storage Management, Logging and
Recovery, Distribution Control, Transaction Control, Memory Management, and Lock Management.
You communicate with an RDBMS using Structured Query Language (SQL).

Components:-
1. The Database Server:-
This takes the SQL, decides how to execute it, with a sub-Component called the Query
Optimizer, and produces a Query Execution Plan.
It is possible to have many Database Server processes running
simultaneously, with each one tailored to a particular kind of "SQL Query".

2. An Archive Process:-
This writes completed Transactions onto a Journal or History File and deletes them from the
Log File. This is done to avoid the Log File filling up, because then everything fails and
the Servers have to be brought down to recover.
This is an embarrassing process. The worst part is that, as the DBA, you often do not know
that the Archive process is not running until the Log File fills up, no more transactions can
start, every program hangs and the phone rings off the hook.

3. A Recovery Process:-
The Recovery Process handles situations where there is a Database crash; it recovers
to the last known point at which the Database was running OK and had 'integrity'.
In other words, all the data representing a consistent set of related records had been written to
the Database at the end of a committed Transaction, with no open Transactions.

Database scheme:-
To define the database schema used by the RDBMS security realm:

• In the left pane, expand Compatibility Security > Realms and click the name of the
RDBMS security realm.
• Under Configuration > Schema for the RDBMS security realm, define the schema used
to store Users, Groups, and ACLs in the database in the Schema Properties box. The
following code example contains the database statements entered in the Schema
properties for the RDBMS code example shipped with WebLogic Server in the
/samples/examples/security/rdbmsrealm directory.

Enter, or select from the combo box drop down list, the Java package name that the generated
classes will belong to, or leave blank for no package. If necessary, enter, or select from the combo
boxes, the catalog and schema where the tables are located. You may select a predefined search
pattern from the Catalog, Schema, and Table pattern combo boxes, or enter your own search
pattern. A table search pattern allows you filter the tables displayed based on the table names. The
Table type option allows you to specify whether to display only tables, only views, both tables
and views, or all table-like objects. The Available list automatically displays the names of all
tables found that match the search criteria.
Catalogs and schemas refer to the organization of data in relational databases, where data is
contained in tables, tables are grouped into schemas, and schemas are grouped into catalogs. The
terms catalogs and schemas are defined in the SQL 92 standard but are not applicable to all
databases. (It is important to note that term schema as used in this section does not refer to the
same 'schema objects' that the mapping tool manipulates.) For example, in desktop databases
such as MS Access there are no such concepts. Also, many databases use slightly different
variations of these terms. For example, in SQL Server and Sybase, tables are grouped by owner,
and catalogs are databases. In this case a list of database names is shown in the catalogs field, and
a list of table owners in the schemas field. It is also very common that the owner of all tables is
the database administrator, so if you do not know the actual owner name, select 'dbo' (under SQL
Server or Sybase), or the actual name of the database administrator.

The following are predefined search patterns that can be selected from the Catalog, Schema, and
Table pattern combo boxes drop down lists:

• [N/A]: Not Applicable. This is the default entry. It means to drop the item from the search
criteria when getting a list of tables. This is usually the best setting for databases for
which the concept of a catalog and/or schema does not apply, such as MS Access.
• [All Catalogs/Schemas/Tables]: Searches for all tables under all catalogs and/or
schemas.
• [No Catalog/Schema]: Searches for all tables that do not belong to a catalog and/or
schema.
• [Current Catalog]: Searches for all tables in the catalog corresponding to the current
connection. This is usually the best setting for databases for which a catalog is
synonymous with a database, such as SQL Server. This entry is only available in the
Catalog combo box.

Relationships:-
You have a 1-to-1 relationship when an object of a class has an associated object of another class
(only one associated object). It could also be between an object of a class and another object of
the same class. You can create the relationship in two ways, depending on whether the two
classes know about each other (bidirectional) or whether only one of the classes knows about the
other (unidirectional). The various possible relationships are described below.
• Unidirectional (where only 1 object is aware of the other)
• Bidirectional (where both objects are aware of each other)
• Unidirectional "Compound Identity" (object as part of PK in other object)

Unidirectional:- For this case you could have two classes, User and Account, where the Account
class knows about the User class but not vice-versa. If you define the Meta-Data for these classes,
this will create two tables in the database: one for User (with name USER), and one for Account
(with name ACCOUNT and a column USER_ID), as follows:-
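A minimal sketch of the two generated tables, written as SQL DDL and run here through Python's sqlite3 (only the USER_ID column comes from the text above; the other column names are assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE USER (
    ID    INTEGER PRIMARY KEY,
    LOGIN TEXT
);
CREATE TABLE ACCOUNT (
    ID      INTEGER PRIMARY KEY,
    USER_ID INTEGER REFERENCES USER(ID)  -- Account owns the relation, so its
                                         -- table holds the foreign-key
);
""")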

Things to note :-

• Account has the object reference (and so owns the relation) to User and so its table holds
the foreign-key
• If you call PM.deletePersistent() on the target end of a 1-1 unidirectional relation while
that object is still referenced by another object, an exception will typically be thrown
(assuming the RDBMS supports foreign keys). To delete this record you should remove
the other object's association first.

Bidirectional:-

For this case you could have the two classes, User and Account, again, but this time the Account
class knows about the User class and vice-versa.

Here we create the 1-1 relationship with a single foreign-key. To do this you define the MetaData
as before; the difference is that we add mapped-by to the field of User. This will create 2 tables in
the database, one for User (with name USER) and one for Account (with name ACCOUNT,
including a USER_ID column). The fact that we specified the mapped-by on the User class means
that the foreign-key is created in the ACCOUNT table.

Things to note :-

• The "mapped-by" is specified on User (the non-owning side) and so the foreign-key is
held by the table of Account (the owner of the relation)
• When forming the relation please make sure that you set the relation at BOTH sides
since JPOX would have no way of knowing which end is correct if you only set one end.

The key to transforming XML into an RDBMS is analyzing the relationships in an XML document
and then mapping those relationships into the RDBMS.
Let's examine the kinds of relationships utilized by an RDBMS - there are three:
1. 1 to 1 relationship (1:1)
We are only interested in the simplest case - the primary entity must participate in the relationship
but the secondary entity need not, e.g. I own one car but my one car does not own me (or does it?).
This relationship is modeled by storing the secondary entity's primary key as a foreign key in the
primary entity's table.
2. 1 to N relationship (1:N)
There is only one case for our purposes - the primary entity may possess multiple secondary
entities.
e.g. I own zero or more books.
This relationship is modeled by storing the primary entity's (the '1') primary key as a foreign key
in the secondary entity's (the 'N') table.
3. N to N relationship (N:N)
For the purposes of transforming XML we do not need these, e.g. the relationship between
students and classes - each student can have multiple classes and each class can have multiple
students.
This relationship is modeled by creating a new table whose rows hold the primary key from
each foreign table.
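A hedged sketch of the three mappings as SQL DDL, run here through Python's sqlite3 (all table and column names are illustrative only):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- 1:1 - the secondary entity's primary key stored in the primary entity's table
CREATE TABLE car    (car_id    INTEGER PRIMARY KEY);
CREATE TABLE person (person_id INTEGER PRIMARY KEY,
                     car_id    INTEGER REFERENCES car(car_id));

-- 1:N - the '1' side's primary key stored as a foreign key in the 'N' side's table
CREATE TABLE book (book_id  INTEGER PRIMARY KEY,
                   owner_id INTEGER REFERENCES person(person_id));

-- N:N - a new table whose rows hold the primary key from each foreign table
CREATE TABLE student    (student_id INTEGER PRIMARY KEY);
CREATE TABLE class      (class_id   INTEGER PRIMARY KEY);
CREATE TABLE enrollment (student_id INTEGER REFERENCES student(student_id),
                         class_id   INTEGER REFERENCES class(class_id),
                         PRIMARY KEY (student_id, class_id));
""")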
30. Non-Spatial Data: SQL, Query Processing, Operations
Definition:

Geography:
The first type, “geography”, stores points, lines, polygons, and collections of these in
latitude/longitude coordinates using a round-Earth model. Most commonly available data is
given in latitude/longitude coordinates, and in a GIS such location-referenced data is referred to
as spatial data.

Spatial data:
The basic spatial entities are points, lines and areas, which can be represented using two different
approaches: raster and vector.

Non-spatial data:

Numeric data such as demography, TP scheme number, final plot number, and time is stored in a
database such as an RDBMS, queried with SQL, and referred to as non-spatial data.

SQL and role of SQL in GIS:


SQL is a language oriented specifically around relational databases, which eliminates a lot of work
that is generally done while using a general purpose programming language like C. Operations
in SQL can operate on entire groups of tables as single objects, and can treat any quantity of
information extracted or derived from them as a single unit as well.
The SQL standard is defined by ANSI (American National Standards Institute), although SQL was
originally a product of IBM.
There are two forms of SQL:
• Interactive
• Embedded
Interactive SQL is used to operate directly on a database to produce output for human
consumption.
Embedded SQL consists of SQL commands placed inside programs that are mostly written in
another language (such as COBOL or PASCAL).
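For illustration, a minimal embedded-SQL sketch, with Python standing in for the host language (the parcels table and its columns are invented, echoing the non-spatial attributes mentioned above):

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# SQL commands embedded in the host program
cur.execute("CREATE TABLE parcels (plot_no INTEGER, scheme TEXT, area REAL)")
cur.execute("INSERT INTO parcels VALUES (?, ?, ?)", (101, "TP-4", 520.5))
conn.commit()

# The embedded query's result set is handed back to the host language
for row in cur.execute("SELECT plot_no, area FROM parcels WHERE area > 500"):
    print(row)    # (101, 520.5)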

What is a Query?
A query is a command you give your database program that tells it to produce certain specified
information from the tables in its memory.

Queries:
The most common operation in SQL databases is the query, which is performed with the
declarative SELECT keyword. SELECT retrieves data from a specified table, or multiple related
tables, in a database. While often grouped with Data Manipulation Language (DML) statements,
the standard SELECT query is considered separate from SQL DML, as it has no persistent effects
on the data stored in a database. Note that there are some platform-specific variations of SELECT
that can persist their effects in a database, such as Microsoft SQL Server's proprietary SELECT
INTO syntax.[11]

SQL queries allow the user to specify a description of the desired result set, but it is left to the
devices of the database management system (DBMS) to plan, optimize, and perform the physical
operations necessary to produce that result set in as efficient a manner as possible. A SQL query
includes a list of columns to be included in the final result immediately following the SELECT
keyword. An asterisk ("*") can also be used as a "wildcard" indicator to specify that all available
columns of a table (or multiple tables) are to be returned. SELECT is the most complex statement
in SQL, with several optional keywords and clauses, including:

• The FROM clause which indicates the source table or tables from which the data is to be
retrieved. The FROM clause can include optional JOIN clauses to join related tables to one
another based on user-specified criteria.
• The WHERE clause includes a comparison predicate, which is used to restrict the number
of rows returned by the query. The WHERE clause is applied before the GROUP BY
clause. The WHERE clause eliminates all rows from the result set where the comparison
predicate does not evaluate to True.
• The GROUP BY clause is used to combine, or group, rows with related values into
elements of a smaller set of rows. GROUP BY is often used in conjunction with SQL
aggregate functions or to eliminate duplicate rows from a result set.
• The HAVING clause includes a comparison predicate used to eliminate rows after the
GROUP BY clause is applied to the result set. Because it acts on the results of the GROUP
BY clause, aggregate functions can be used in the HAVING clause predicate.
• The ORDER BY clause is used to identify which columns are used to sort the resulting
data, and in which order they should be sorted (ascending or descending). The order of
rows returned by a SQL query is never guaranteed unless an ORDER BY clause is
specified.
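A hypothetical query that exercises each of these clauses, run through Python's sqlite3 (the parcels table and its contents are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parcels (scheme TEXT, landuse TEXT, area REAL);
INSERT INTO parcels VALUES ('TP-4', 'residential', 520.5),
                           ('TP-4', 'commercial',  310.0),
                           ('TP-7', 'residential', 880.2);
""")
query = """
SELECT   scheme, landuse, SUM(area) AS total_area  -- columns in the final result
FROM     parcels                                   -- source table
WHERE    area > 100                                -- row filter, applied before GROUP BY
GROUP BY scheme, landuse                           -- group rows with related values
HAVING   SUM(area) > 300                           -- filter applied after grouping
ORDER BY total_area DESC                           -- sort the resulting data
"""
for row in conn.execute(query):
    print(row)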

Data processing operations:

Some typical Operations that you can perform are: add records from one table to another table,
import or export spreadsheets or text files, post values from one table to another, and update the
field values in all, or a subset of records; just to name a few.

Data Processing Overview:

In SQL, you can change data at any time by selecting a record and entering new values. This
method works well when you are editing a few records, but can become very time consuming
when you are working with hundreds or thousands of records. The Operations described below
handle larger data manipulation tasks.

Typical Operations might include the following tasks:

• Change all field values to uppercase or lowercase.


• Summarize or cross-tabulate records in a table.
• Mark or Delete duplicate or out-of-date records.
• Add records from one table to another table.
• Import or Export spreadsheets or text files.
• Post values from one table to another table.
• Update the field values in all, or a subset of records.

Key Operation Terms:

Term: Description

Operation: A process in which SQL manipulates data. Data might come from one or more tables.

Transaction Table: A table used in an Operation which generally is not changed by the Operation.

Master Table: A table used in an Operation which generally is changed by the Operation.

Result Table: Contains the output of an Operation, depending on the Operation type.

Linking Key: A common value between records in different tables.

SQL has a variety of Operation types that let you transform data. The following table describes them.

Operation: Description

Mark, Unmark, and Delete Duplicate records: Marks, unmarks, or deletes duplicate records in the
master table.

Export and Import: Sends records to and receives records from common file formats, such as
ASCII text and those used by Microsoft Excel and Lotus 1-2-3.

Post data: Adds, subtracts, or replaces values in the master table with values from matching
records in the transaction table.

Query records: Selects and sorts specific records in a table, and saves the query for future use.

Update records: Changes values in the master table using criteria you specify.

Convert case of fields: Changes text to uppercase, lowercase, or mixed case in one or more fields.

Search and replace text: Searches in one or more fields for a value, and replaces it with another
value.

Copy records: Copies selected records from a table or a set to a new table, the result table. You
can use copy with a set to copy values from multiple tables to a single table.

Cross tab: Creates a result table whose field names correspond to field values in the master table.
The field data are cross-tabulated summary values.

Intersect records: Creates a result table with records that are common to both the master and
transaction tables.

Join tables: Creates a result table containing fields from both the master and transaction tables.

Subtract records: Creates a result table by subtracting records in one table from another table.

Summarize records: Creates a result table that summarizes records in the input master table.

SQL can help you perform complex Operations, such as Update Operations, that can do the
following tasks:

• Search for a string in a field and replace it with another string.


• Assign a constant value to a field.
• Break a single name into its parts; for example, separate first name and last name fields.
• Break a city, state, zip field into its parts; for example, separate city, state, and zip fields.
• Assign serial values to a field; for example, record 1: A100, record 2: A101, record 3:
A102, and so on.
• Assign random values to a field.
• Assign a constant value to field, or delete a field value.
• Compute the number of days between two date values.
• Compute the time interval between two time values.
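A few of these Update Operations sketched in SQL via Python's sqlite3 (the owners table and its fields are hypothetical):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE owners (name TEXT, city TEXT, start_date TEXT, end_date TEXT);
INSERT INTO owners VALUES ('asha rao', 'ahmedabad', '2007-01-10', '2007-03-15');
""")
cur = conn.cursor()
# Change a field value to uppercase
cur.execute("UPDATE owners SET city = UPPER(city)")
# Search for a string in a field and replace it with another string
cur.execute("UPDATE owners SET name = REPLACE(name, 'rao', 'Rao')")
# Compute the number of days between two date values
cur.execute("SELECT julianday(end_date) - julianday(start_date) FROM owners")
print(cur.fetchone()[0])    # 64.0 days
conn.commit()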

Conclusion:
Thus, where spatial data deals with space, geography, and the location of a particular thing,
non-spatial data deals with numbers and time, which can be handled by running queries in an
RDBMS or SQL server as described in the assignment above.
31. Spatial Data Input: Digitization, Error Identification,
Types and Sources of Error, Correction, Editing, Topology
Building
Definition:
Data input is the process of encoding data into computer-readable format and assigning the
spatial data to a Geographic Information System (GIS).

Spatial data:
The transformation from the spherical geographic grid to a plane coordinate system is called
map projection. Hundreds of map projections have been developed for map making. Every map
projection preserves certain spatial properties while sacrificing other properties. Spatial features
may be discrete or continuous. Discrete features are those that do not exist between observations;
they form separate entities and are individually well distinguishable, for example wells and roads.
Continuous features exist spatially between observations; precipitation and elevation are examples
of continuous features. GIS uses two basic data models to represent spatial features:
• vector
• raster.
The vector data model uses points and their x-, y-coordinates to construct spatial features of
points, lines and areas. The raster data model uses a grid to represent the spatial variation of a
feature. Each cell in the grid has a value that corresponds to the characteristic of the spatial feature
of that location.

Spatial Data Input :


Two basic options for database construction are (a) use existing data, or (b) create new data. There
are two methods for converting paper maps to digital maps: digitizing by using a digitizing table
or a computer monitor, also called manual digitizing, and scanning. Scanning is preferred over
manual digitizing in most cases, because scanning uses the machine and computer algorithms to
do most of the work, thus avoiding human errors caused by fatigue or carelessness. Digitizing errors
can be removed through data editing, a part of database construction. One common type of
digitizing error relates to the location accuracy of spatial data, such as missing lines or distorted
lines. Another common type consists of topological errors, such as dangling arcs and unclosed
polygons, which are caused by failure of digitized features to follow the topological relationships
among points, lines, and areas. The input method chosen depends upon several factors such as
accuracy standards, form of output product needed and equipment availability.

Data Models for Spatial Data:

• Vector Data
– Non-Topological (simple data)
– Topological (simple data and higher-level data: TIN, Regions, Dynamic Segmentation)
• Raster Data

Digitization:
Digitizing Vector

Although vector data structure is the choice as the primary form for handling graphical data in
most GIS and CAD packages, vector data acquisition is often more difficult than raster image
acquisition, because its abstract data structure, topology between objects and attributes associated.

In the following, we explain the commonly used methods for getting vector data, their advantages
and drawbacks.

Manual digitizing

Manual digitizing using a digitizing tablet has been widely used. With this method, the operator
manually traces all the lines from his hardcopy map using a pointer device and create an identical
digital map on his computer. A line is digitized by collecting a series of points along the line.

Although this method is straightforward, it requires an experienced operator and is very time
consuming. For a complex contour map, it can take a person 10 to 20 days to get the map fully
digitized.

Another major drawback of this method is its low accuracy. The accuracy of manual digitizing
depends largely on how accurately the hardcopy map is duplicated on a computer by hand. The
spatial accuracy level the human hand can resolve is about 40 DPI (dots per inch) in the best case,
and it drops when the operator becomes tired and bored after working for a period of time.
In one experiment at a university, a group of geography students were asked to digitize
the same map, and the final digitized maps were overlaid on top of each other to create a new map.
The result is not surprising: the new map is heavily distorted as compared to the original map.

Manual digitizing is supported by most GIS packages with a direct link to a digitizing tablet
through a computer I/O port.

Heads-Up Digitizing and Interactive Tracing

Heads-up digitizing is similar to manual digitizing in the way the lines have to be traced by hand,
but it works directly on the computer screen using the scanned raster image as backdrop. While
lines are still manually traced, the accuracy level is higher than using digitizing tablet because the
raster images are scanned at high resolution, normally from 200 DPI to 1600 DPI. With the help
of the display tools, such as zoom in and out, the operator can actually work at the resolution of
the raster data and therefore digitize at a higher accuracy level. However, the accuracy level is still not
guaranteed because it is highly dependent on the operator and how he digitizes. This method is
also time-consuming and takes about the same amount of time as the manual digitizing method.

The interactive tracing method automates individual line tracing process by tracing one line at a
time under the guidance of the operator. This is a significant improvement over manual heads-up
digitizing in terms of digitizing accuracy and speed, especially when fully automatic raster-to-
vector conversion cannot be applied in cases such as low image quality and complex layers. The
main advantage of using interactive tracing is the flexibility of tracing lines selectively and better
operator control.

Automatic Digitizing:

Two digitizing methods are considered here: scanning and automatic line following. Scanning is
the most commonly used method of automatic digitizing. Scanning is an appropriate method of
data encoding when raster data are required, since this is the automatic output format from most
scanning software. Thus, scanning may be used to input a complete topographic map that will be
used as a raster background for vector data such as pipelines or cables. In this case a raster background map
is extremely useful as a contextual basis for the data of real interest. Another type of automatic
digitizer is the automatic line follower. This encoding method might be appropriate where digital
versions of clear, distinctive lines on a map are required ( such as country boundaries on a world
map, or clearly distinguished railways on a topographic map). The method mimics manual
digitizing and uses a laser- and light-sensitive device to follow the lines on the map. Whereas
scanners are raster devices, the automatic line follower is a vector device and produces output as
(x,y) coordinate strings.

TOPOLOGY

Topology is implemented as a set of integrity rules that define the behavior of spatially related
geographic features and feature classes. Topology rules, when applied to geographic features or
feature classes in a geodatabase, enable GIS users to model such spatial relationships as
connectivity (are all of my road lines connected?) and adjacency (are there gaps between my
parcel polygons?). Topology is also used to manage the integrity of coincident geometry between
different feature classes (e.g., are the coastlines and country boundaries coincident?).

Why Is Topology Needed?

Topology applies GIS behaviors to spatial data. Topology enables GIS software to answer
questions such as adjacency, connectivity, proximity, and coincidence. In ArcGIS, a topology
provides a powerful and flexible way for users to specify the rules for establishing and
maintaining the quality and integrity of their spatial data. You want to be able to know, for
example, that all your parcel polygons completely form closed rings, they don't overlap one
another, and there are no gaps between parcels. You can also use topology to validate the spatial
relationships between feature classes. For example, the lot lines in your parcel data model must
share coincident geometry with the parcel boundaries.

How Is Topology Modeled in the Geodatabase? In ArcGIS, a topology can be defined for one or
more of the feature classes contained in a feature data set. It can be defined for multiple point,
line, and polygon feature classes. A topology is a set of integrity rules for the spatial relationships
along with a few important properties: a cluster tolerance, feature class ranks (for coordinate
accuracy), errors (rule violations), and any exceptions to the rules you've defined. ArcEditor and
ArcInfo include a topology wizard to select which feature classes will participate in a topology
and define these properties.

Topology rules

Topology rules can be defined for the features within a feature class or for the features between
two or more feature classes. Example rules include polygons must not overlap, lines must not
have dangles, points must be covered by the boundary of a polygon, polygon class must not have
gaps, lines must not intersect, and points must be located at an endpoint. Topology rules can also
be defined for the subtypes of a feature class. Geodatabase topology is flexible since you select
which rules apply to the data in your feature class or feature data set.
Topology Properties

The cluster tolerance is similar to the fuzzy tolerance. It is a distance range in which vertices are
considered coincident. Vertices and endpoints falling within the cluster tolerance are snapped
during the validate topology process.

Coordinate accuracy ranks are defined at a feature class level and control how much the features
in that class can potentially move in relation to features in other classes when a topology is
validated. The higher the rank (one being the highest), the less the features move during the
validate process.

Geodatabase Topology Benefits

The ArcInfo coverage model explicitly defines, stores, and maintains the topological information
within the coverage structure and employs a fixed set of tools for creating and maintaining
topology. The result is a tightly controlled environment in which the work flow is dictated by the
software and topological integrity is steadfastly maintained. The data model does not allow much
flexibility. Thus, application development (ArcEdit macros) for editing is required to build and
maintain more sophisticated data models than many GIS applications require.

In ArcGIS, geodatabase topology provides a powerful, flexible way for you to specify the rules
for establishing and maintaining the quality and integrity of your data, as well as providing a suite
of tools specifically designed to support topological geodatabase editing and maintenance (see
sidebar). The benefits of defining a topology in the geodatabase model include

• Better data management--You select which feature classes participate in a topology.


• More flexibility--Multiple polygon, point, and line feature classes can participate in a
topology.
• Improved data integrity--You specify the appropriate topological rules for your data.
• More opportunities for data modeling--A much greater number of possible spatial
constraints can be applied to your data.
• ArcSDE multiuser environment--Take advantage of ArcSDE and the multiuser editing
environment.
• Large map layers--Extremely large continuous map layers are stored in a single database.

Topology in the geodatabase model offers a more flexible environment along with the ability to
define and apply a wider set of integrity rules and constraints. As a result, almost any work flow
can be employed in which topological integrity is analyzed only at designated times specified by
the user. The user is no longer forced to rerun a clean command to rebuild topology. The user can
choose to validate the geodatabase topology at any time, perform queries and analyses using the
geodatabase data, and continue to produce high-quality maps.

ERRORS IN DIGITIZATION

Error is a flaw in data: the difference between the real world and its representation in the GIS.
Error goes beyond mere mistakes; it includes technical issues such as GIS operations, processing
algorithms, misuse of statistics, operator bias, and equipment quality.

Spatial data errors can occur in each of the methods listed above. Because data is shared
among many in the GIS community and used for legal matters, the spatial data set should identify
its data quality. Spatial data documentation should include the history of a data set, the source
date, positional and attribute accuracy, completeness of the data set, and the processing method
used to create the spatial data. Knowledge of this information helps the user to determine the
usability and liability of spatial data. The ability to identify and rectify spatial data errors allows
the user to get the maximum quality and usage out of a data set.

The GIS conundrum – looking good does not mean it is good:

– GIS analyses usually incorporate data collected from diverse sources

– Often the data can superficially look good but still contain errors that limit its utility for
GIS analysis

TYPES OF ERRORS

Errors can be classified into three types

• Spatial errors
• Attribute errors
• Procedural/Analytic errors

Errors can generally occur at three phases in GIS analysis

• Data collection phase


• Data input and editing phase
• Methodological phase

• Satellite sensors and aerial cameras can introduce error


• Surveying equipment and GPS instruments have associated errors
• Field recorders or instruments may not be able to always accurately capture the data
• Original map documents have inherent inaccuracies
• Features change over time (modified, destroyed, added)

• Digitizing Errors
– Systematic errors are often related to inaccurate geo-registration
– Random errors can be introduced by missed or inaccurately drawn features
• Attribute data entry errors
– Humans often make errors in transcribing attributes into GIS
• Equipment Errors
– Occasionally scanners, digitizing tablets, etc. can go off calibration

• Error is closely related to accuracy (i.e., Higher accuracy implies fewer errors).
• Three classes of errors:
– Gross errors – refer to “mistakes”. They can be detected and avoided via well-
designed and careful data collection.
– Systematic errors – occur due to factors such as human bias, poorly calibrated
instruments, or environmental conditions.
– Random errors – They cannot be avoided and can be treated with
mathematical/statistical models.

Topological Errors in vector systems

(a) Effects of tolerance on topological cleaning

(b) Topological ambiguities in raster to vector conversion
– Features in digitized data contain artifacts that violate the topological rules of the
feature type, such as undershoots, overshoots, and dangling nodes (a dangling node is
acceptable in certain circumstances, e.g. streams and roads).

Validate Topology Errors

The validate topology operation is used to snap feature geometry where vertices fall within the
cluster tolerance and to check for violations of the specified topology rules. Validate topology
begins by snapping together feature vertices that fall within the cluster tolerance taking into
account the ranks (as described above) of the feature classes. If feature vertices are found within
the cluster tolerance, the features from the feature class with the lowest rank of coordinate
accuracy will be moved to the features with the higher rank. As part of the snapping routine,
validate topology will also add vertices where features intersect if a vertex does not already exist.

Also, any rule violations discovered during validate topology are marked as errors. A complete
error listing is available in the properties of the topology in ArcCatalog and ArcMap. In ArcMap,
errors can be searched for, displayed, or listed in the Error Inspector.

Correcting Errors in the Topology

When an error is discovered during the validate topology operation, the user has three options:

1. Correct the error using the Fix Topology Error tool or some other method.
2. Leave the error unresolved.
3. Mark the error as an exception. The Fix Topology Error tool offers a variety of methods
for resolving an error depending on the error and the feature type.
Rasterization errors
Vector to raster conversion can cause an interesting assortment of errors in the resulting data, for
example:
• Topological errors
• Loss of small polygons
• Effects of grid orientation
• Variations in grid origin and datum


Figure: Topological error in vector GIS; (a) loss of connectivity and creation of false connectivity,
(b) loss of information.

Errors in data processing and analysis


• GIS operations that can introduce errors include the classification of data, aggregation or
disaggregation of area data, and the integration of data using overlay techniques.
• Where a certain level of spatial resolution or a certain set of polygon boundaries are
required, data sets that are not mapped with these may need to be aggregated or
disaggregated to the required level.

Attribute error due to processing


• Attribute errors can result from positional error (such as the missing ‘hole’ feature in map A
that is present as an island in map B). If one of the two maps that are overlaid contains an
error, then a classification error will result in the composite map (polygon BA).
Causes of errors in spatial data

• Measurement errors: accuracy (ex. Altitude measurement or soil samples, usually related
to instruments)
• Computational errors: precision (ex. to what decimal point the data is represented?)
• Human error: error in using instruments, selecting scale, location of samples
• Data model representation errors
• Errors in derived data
Data quality issues: Sources of error in GIS

• Errors arising from our understanding and modeling of reality


o The different ways in which people perceive reality can have effects on how they
model the world using GIS.
• Errors in source data for GIS
o Accurately reproducing an inaccurate paper map simply propagates the error
o All digitization is limited by the resolution of the underlying data source
o Survey data can contain errors due to mistakes made by people operating
equipment, or due to technical problems with equipment.
o Remotely sensed and aerial photography data could have spatial errors if they
were georeferenced wrongly, and mistakes in classification and interpretation would
create attribute errors.

• Errors in data encoding


• Errors in data editing and conversion
• Errors in data processing and analysis
• Errors in data output
• Operational errors introduced during manual digitizing
o Psychological errors: Difficulties in perceiving the true centre of the line being digitized
and inability to move the cursor cross-hairs accurately along it.
o Physiological errors: These result from involuntary muscle spasms that give rise to random
displacements.
o Line thickness: The thickness of lines on a map is determined by the cartographic
generalization employed.
o Method of digitizing: Point mode and stream mode

Spatial Data Editing :

Refers to the removal of errors from, and updating of, digital maps. Newly digitized maps, no
matter how carefully prepared, always have some errors. Digital maps downloaded from the
internet may contain errors from initial digitizing or from outdated data sources. Spatial Data
Editing covers two types of errors. Location errors such as missing polygons or distorted lines
relate to inaccuracies of map features, while others such as dangling arcs and unclosed polygons
relate to logical inconsistencies among map features. To correct location errors, one often has to
reshape individual arcs and digitize new arcs. To correct topological errors, one must be
knowledgeable about the topological relationships required and use a topology – based GIS
package to help make corrections.

Spatial Data Editing can go beyond individual digital maps. When a study area covers more than
one source map, editing must be expanded to cover errors in matching lines across the map
border. Spatial Data Editing may also include line simplification, line smoothing, and transferring
of map features between maps.

Most GIS packages will provide a suite of editing tools for the identification and removal of
errors in vector data. Corrections can be done interactively by the operator ‘on screen’, or
automatically by the GIS software. However, visual comparison of the digitized data against the
source document, either on paper or on the computer screen, is a good starting point. This will
reveal obvious omissions, duplications and erroneous additions. Systematic errors such as
overshoots in digitized lines can be corrected automatically by some digitizing software, and it is
important for data to be absolutely correct if topology is to be created for a vector data set.
Noise may be inadvertently added to the data, either when they were first collected or during
processing. This noise often shows up as scattered pixels whose attributes do not conform to those
of neighboring pixels. This form of error may be removed by filtering. Filtering is considered in
this book as an analysis technique but in brief, it involves passing a filter ( a small grid of pixels
specified by the user – often a 3x3 pixel square is used) over the noisy data set and recalculating
the value of the central (target) pixel as a function of all the pixel values within the filter. This
technique needs to be used with care as genuine features in the data can be lost if too large a filter
is used.
32. Automating the overlay process
Overlay operations involve the placement of one map layer (set of features) A, on top of a second
map layer, B, to create a map layer, C, that is some combination of A and B. C is normally a new
layer, but may be a modification of B. Layer A in a vector GIS will consist of points, lines and/or
polygons, whilst layer B will normally consist of polygons. All objects are generally assumed to
have planar enforcement, and the resulting object set or layer must also have planar enforcement.
The general term for such operations is topological overlay, although a variety of terminology is
used by different GIS suppliers, as we shall see below. In raster GIS layers A and B are both grids,
which should have a common origin and orientation; if not, resampling is required.

The process of overlaying map layers has some similarity with point set theory, but a large
number of variations have been devised and implemented in different GIS packages. The
principal operations have previously been outlined as the spatial analysis component of the OGC
simple features specification. The Open Source package, GRASS, is a typical example of a GIS
that provides an implementation of polygon overlay which is very similar to conventional point
set theory (Figure1), with functions provided including:

• Intersection, where the result includes all those polygon parts that occur in both A and B

• Union, where the result includes all those polygon parts that occur in either A or B, so is
the sum of all the parts of both A and B

• Not, where the result includes only those polygon parts that occur in A but not in B
(sometimes described as a Difference operation), and

• Exclusive or (XOR), which includes polygons that occur in A or B but not both, so is the
same as (A Union B) minus (A Intersection B)

TNTMips provides similar functionality and uses much the same terminology as GRASS (AND,
OR, XOR, SUBTRACT) under the heading of vector combinations rather than overlay
operations, and permits lines as well as polygons as the “operator” layer (Figure 1).
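These four operations have direct cell-by-cell analogues for co-registered rasters; a minimal numpy sketch with invented 3x3 boolean grids:

import numpy as np

A = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=bool)
B = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)

intersection = A & B     # polygon parts that occur in both A and B
union        = A | B     # parts that occur in either A or B
a_not_b      = A & ~B    # parts in A but not in B (Difference)
a_xor_b      = A ^ B     # parts in A or B but not both
print(a_xor_b.astype(int))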

In land suitability assessment, the map overlay technique is often used in conjunction with a
weighting scheme. A person first determines parent maps' weights by his perceptions about the
importance or relative importance of these maps to land suitability. These weight values are then
incorporated into the map overlay process. On the resultant overlaid maps, the higher suitability
scores are always assigned to those sites that have better conditions on the more important parent
maps. Of the two approaches that one can take in determining maps' weights, tradeoff weighting
is more precise than direct assessment, but also more difficult to use because it requires greater
cognitive efforts from the users. This article presents a weighting-by-choosing method that
facilitates the process of making tradeoffs through a series of site selection exercises. By using
hypothetical reference sites as tangible manipulatives, it transforms an otherwise difficult
cognitive task into a simple selection exercise. At present, the method applies to two maps at a
time, but could potentially be extended to multiple maps.

OVERLAY AND COMBINATION OPERATIONS

Figure 1. GRASS overlay operations (v.overlay). For inputs A and B, the panels show
A Intersection B (A AND B), A Union B (A must be a polygon), A NOT B, and A XOR B
(A must be a polygon).
Source: http://grass.itc.it/grass60/screenshots/vector.php

OVERLAY OPERATIONS:
The hallmark of GIS is overlay operations. Using these operations, new spatial elements are
created by the overlaying of maps.
There are basically two different types of overlay operations depending upon data structures:

1.RASTER OVERLAY- It is a relatively straightforward operation and often many data sets can
be combined and displayed at once.
2.VECTOR OVERLAY-The vector overlay, however is far more difficult and complex and
involves more processing.

LOGICAL OPERATORS:
The concept of map logic can be applied during overlay. The logical operators are Boolean
functions. There are basically four types of Boolean Operators: viz., OR, AND, NOT, and XOR.

With the use of logical, or Boolean, operators, spatial elements or attributes are selected that
fulfill a certain condition depending on two or more spatial elements or attributes.

1. VECTOR OVERLAY
During vector overlay, map features and the associated attributes are integrated to produce new
composite maps. Logical rules can be applied to how the maps are combined. Vector overlay can
be performed on different types of map features: viz.,

Polygon-on-polygon overlay
Line-in-polygon overlay
Point-on-polygon overlay

During the process of overlay, the attribute data associated with each feature type is merged. The
resulting table will contain both sets of attribute data. The process of overlay will depend upon the
modelling approach the user needs. One might need to carry out a series of overlay procedures to
arrive at the conclusion, which depends upon the criterion.

Polygon-on-Polygon Overlay
FIGURE 2: Difference between a Topological Overlay and a Graphic Overplot

2. Raster Overlay
In raster overlay, the pixel or grid cell values in each map are combined using arithmetic and
Boolean operators to produce a new value in the composite map. The maps can be treated as
arithmetic variables on which complex algebraic functions can be performed. The method is often
described as map algebra. A raster GIS provides the ability to combine map layers
mathematically. This is particularly important for modelling in which various maps are combined
using various mathematical functions. Conditional operators are among the basic mathematical
functions supported in GIS.

Conditional Operators
Conditional operators were already used in the examples given above. They all evaluate whether a
certain condition has been met.

=   (eq)  'equal' operator
<>  (ne)  'not equal' operator
<   (lt)  'less than' operator
<=  (le)  'less than or equal' operator
>   (gt)  'greater than' operator
>=  (ge)  'greater than or equal' operator
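Such conditional operators can be applied cell by cell to a raster; a small numpy sketch (the grid values are invented) that replaces every cell greater than 5 with 0 and keeps the rest:

import numpy as np

ingrid1 = np.array([[2, 7],
                    [6, 3]])
outgrid = np.where(ingrid1 > 5, 0, ingrid1)   # the 'greater than' condition drives the result
print(outgrid)    # [[2 0]
                  #  [0 3]]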

Many systems can now handle both vector and raster data, and vector maps can easily be draped
on to raster maps.

APPLICATION:
A Physical Evaluation of Land Suitability for Rice

The objective of this study was to establish a spatial model for land evaluation for rice
using GIS.

The study area, the lower Namphong watershed, covers an area of about 3000 sq. km
and is located in Northeast Thailand. A land unit resulting from the overlay process of the
selected theme layers has unique information on the land qualities on which the suitability
is based. The selected theme layers for rice include water availability, nutrient
availability, landform, soil texture and salinization of soil. The theme layers were
collected from existing information and satellite data. Analysis of rainfall data and
irrigation area gives the water availability. Spatial information on nutrient availability was
formulated using the soil map of the Land Development Department. Landform of the area was
prepared from Landsat TM. Soil texture and salinization of soil are based on the soil map.

Each of the above-mentioned layers with associated attribute data was digitally encoded
in a GIS database to create thematic layers. Overlay operation on the layers produces a
resultant polygonal layer, each polygon of which is a land unit with characteristics of the land.
A land suitability rating model applied to the resultant polygonal layer provided the
suitability classes for rice. The resultant suitability classes were checked against the rice
yields collected by the Department of Agriculture Extension. The agreement was found to be
satisfactory.
The evaluation model is defined using the value of factor rating as follows:

Suitability = W x NAI x R x S x T.
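As an illustration only, this multiplicative rating model can be evaluated cell by cell with map algebra; the 2x2 factor grids below are invented and not taken from the study:

import numpy as np

W   = np.array([[3, 2], [1, 2]])    # water availability rating
NAI = np.array([[2, 2], [1, 1]])    # nutrient availability index
R   = np.array([[1, 1], [1, 0]])    # landform rating (0 = unsuitable)
S   = np.array([[2, 1], [2, 1]])    # soil texture rating
T   = np.array([[1, 1], [1, 1]])    # salinization rating

suitability = W * NAI * R * S * T   # cell-by-cell product of the theme layers
print(suitability)                  # [[12  4]
                                    #  [ 2  0]]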

FIGURE 3: SCHEMATIC CHART OF GIS APPLICATION TO LAND SUITABILITY FOR RICE

Results and discussion

The suitability map resulting from the spatial overlay of factors in the Lower Namphong
Watershed is presented in figure 4. The area under each suitability class is shown in table 1.
Figure 4. Land Suitability for Rice in the Lower Namphong Watershed, Northeast Thailand

Table 1. The suitability area for rice in the Lower Namphong Watershed, Northeast
Thailand
Suitability class Area (km2) %
Highly suitable 208.30 6.97
Moderately suitable 868.26 29.03
Marginally suitable 1265.47 42.32
Unsuitable 530.27 17.73
(Water body) 36.63 1.23
(Village) 81.48 2.72
Total 2990.41 100

The study provides an approach to identify parametric values in modeling the land
suitability for rice. The theme layers to be input in the modeling are assigned the rating
value as attribute data. Overall insight into the factors affecting the suitability of land can
be provided spatially and quantitatively. The result indicated that the highly suitable land
covers an area of about 208.3 km2 and is restricted to the irrigated areas with high NAI.
Some 17.73 percent of the watershed is unsuitable for rice, which corresponds to the sloping land.
sloping land. It has become increasingly apparent that computer based GIS and remote
sensing data can provide the means to model land suitability effectively.

To assess the reliability of the methodology developed, the suitability classes were
checked against the rice yield. The rice yields in the study area were on average 4171.87,
2968.75 and 2078.12 kg/ha for the units of classes S1, S2 and S3 respectively. For
more accurate results, average rice yields should be collected periodically, possibly over 4-5
consecutive years. Further investigation will be needed to establish the relation between the
results and rice yield.

In conclusion, with spatial modeling analysis it is possible to assess land suitability
with higher accuracy. In addition, the modeling provides an approach to improving
rice yield by enhancing the components of the modeling input.
33. Raster Based Analysis, Map Algebra, Grid Based
Operations, Local, Focal, Zonal & Global Functions
Raster Based Analysis :-

Raster analysis is similar in many ways to vector analysis. The major differences between raster
and vector modeling are dependent on the nature of the data models themselves. In both raster
and vector analysis, all operations are possible because datasets are stored in a common
coordinate framework. Every coordinate in the planar section falls within or in proximity to an
existing object, whether that object is a point, line, polygon, or raster cell.
Raster analysis, on the other hand, enforces its spatial relationships solely on the location of the
cell. Raster operations performed on multiple input raster datasets generally output cell values
that are the result of computations on a cell-by-cell basis. The value of the output for one cell is
usually independent of the value or location of other input or output cells. In some cases, output
cell values are influenced by neighboring cells or groups of cells, such as in focal functions.
Raster data are especially suited to continuous data. Continuous data change smoothly across a
landscape or surface. Phenomena such as chemical concentration, slope, elevation, and aspect
are dealt with in raster data structures far better than in vector data structures. Because of this,
many analyses are better suited or only possible with raster data.

Why We Use Raster GIS

• Raster is better suited for spatially continuous data like elevations
• Raster is better for creating visualizations and modeling environmental phenomena
• Other continuous data may include pH, air pressure, temperature, salinity, etc.
• Raster data is a simplified realization of the world, and allows for fast and efficient
processing
• A raster GIS performs geoprocessing tasks on a grid-based realization of the world

Raster Analysis Basics

• GISs can display data in various formats but usually can only analyze data in a specific
format (e.g. ArcGIS can only analyze grids).
• Raster analysis is based on the cell as the basic unit of analysis
– Can perform analysis on individual cells
– Can analyze data on a group of cells
– Can perform analysis on all cells within a grid
• Analysis can operate on single raster grids or multiple raster grids
• Data Analysis Environment
– Specifies the extent of the analysis area
– Specifies the cell size of the output grid
• Mask Grid
– Can also be used to define the area of analysis

Map Algebra :-

Like most of the analytical frameworks embodied in current GIS packages, map algebra is
primarily oriented toward data that are static. Each layer is associated with a particular moment
or period of time, and analytical capabilities are intended to deal with spatial relationships. In its
original form, map algebra was never intended to handle spatial data with a temporal component.
However, as the availability of spatio-temporal data has increased dramatically in recent years
due to the growth of satellite remote sensing and other technologies, and as the sophistication of
video games and animation in the motion picture industry has raised popular expectations for
spatio-temporal processing capabilities, there has also been an increasing demand for a
spatio-temporal extension of GIS.

Map Algebra
Map algebra is a cell by cell combination of raster layers using mathematical operations

– Unary – one layer

– Binary – two layers

Basic Mathematical Operations

– Addition, subtraction, division, max, min, virtually any mathematical operation you
would find in an Excel

– Strong analytical functions.

Some Map Algebra Commands In Arc/Info

Outgrid = grid1 + grid2

Outgrid = grid1 * 2

Outgrid = sin(grid1)

Outgrid = costallocation(sourcegrid, costgrid, accumgrid, backgrid)

Outgrid = con(ingrid1 > 5, 0, ingrid1)

Outgrid = select(grid1, ‘VALUE = 10’)

Map algebra and raster GIS are quite simple to visualize in a spreadsheet, for example as
cell-by-cell multiplication and addition. The use of arrays makes map algebra and raster GIS very
computationally efficient. But be careful of:

• Layers that are not coincident
• Different cell sizes

Map algebra can be extended to perform a wide range of mathematical operations; the computer
will allow you to perform virtually any mathematical calculation. For example, you can create a
grid where water features are 0 and land values are 1. Then you can multiply this grid with an
elevation map. The output will include 0’s where water existed (x * 0 = 0), and the original
elevation value where land existed (x * 1 = x). Or you can add the elevations and the 0/1 grid
together (but it would be meaningless!).
Figure: Grid1 * Grid2 = Grid3. Where the mask grid (Grid2) holds 0 for water, the output grid is 0,
marking cells where one can’t build since there is water.
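A numpy sketch of exactly this example, with an invented 0/1 water mask and elevation grid:

import numpy as np

land_mask = np.array([[0, 1, 1],
                      [0, 1, 1],
                      [0, 0, 1]])        # 0 = water, 1 = land
elevation = np.array([[ 5, 12, 18],
                      [ 4, 10, 15],
                      [ 3,  2,  9]])

masked_elev = land_mask * elevation      # x * 0 = 0 over water, x * 1 = x on land
print(masked_elev)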
Grid Based Operations :-

ArcGIS can deal with several formats of raster data. Although ArcGIS can load all supported
raster data types as images, and analysis can be performed on any supported raster data set, the
output of raster analytical functions is always an ArcInfo format grid. Because the native raster
dataset in ArcGIS is the ArcInfo format grid, from this point on the term grid will mean the
analytically enabled raster dataset.

Grid Layers

Grid layers are graphical representations of the ArcGIS and ArcInfo implementation of the raster
data model. Grid layers are stored with a numeric value for each cell. The numeric cell values are
either integer or floating-point. Integer grids have integer values for the cells, whereas floating-
point grids have value attributes containing decimal places.

Cell values may be stored in summary tables known as Value Attribute Tables (VATs) within the
info subdirectory of the working directory. Because the possible number of unique values in
floating-point grids is high, VATs are not built or available for floating-point grids.

VATs do not always exist for integer grids. VATs will exist for integer grids that have:

• A range of values (maximum minus minimum) less than 100,000, and
• A number of unique values less than 500

It is possible to convert floating-point grids to integer grids, and vice versa, but this frequently
leads to a loss of information. For example, if your data have very precise measurements
representing soil pH, and the values are converted from decimal to integer, zones which were
formerly distinct from each other may become indistinguishable.

Grid zones are groups of either contiguous or noncontiguous cells having the same value. Grid
regions are groups of contiguous cells having the same value. Therefore, a grid zone can be
composed of 1 or more grid regions.

Although Raster Calculations (which will be discussed shortly) can be performed on both integer
and floating-point grids, normal tabular selections are only possible on integer grids that have
VATs. This is because a tabular selection is dependent on the existence of an attribute table. Those
grids without VATs have no attribute tables, and are therefore unavailable for tabular selections.

There are a large number of basic grid operations supported for image and general raster files.
These include local, focal and zonal operations depending on the scope of the operation. Such
operations may be applied to a single grid, or to a number of input grids, depending on the
operation in question. The set of possible operations of this type are often referred to as Map
Algebra. Originally this term was introduced by Tomlin (1983) as the process of map
combination for co-registered layers with rasters of identical size and resolution. Combinations
involved arithmetic and Boolean operations. However the term is now used more widely by many
suppliers. For example, ArcGIS describes the set of all operations performed using its Spatial
Analyst option as “Map Algebra”. More specifically it divides such functions into five main
categories, the three above plus Global and Application-specific:

• Local functions, which include mathematical and statistical functions, reclassification, and
selection operations
• Focal functions, which provide tools for neighbourhood analysis
• Zonal functions, which provide tools for zonal analysis and the calculation of zonal
statistics
• Global functions, which provide tools for full raster layer or raster dataset analysis, for
example the generation of Cost Distance rasters
• Application functions, specific to tasks such as hydrology and geometric transformation

Grid Function Types


There are four basic categories of functions for the creation of new grids: local, global, focal, and
zonal.

Local Functions

Most grid operations perform their algorithm on every cell in the dataset. You can think of the
local function calculation engine as starting at one cell location, performing a calculation once on
the inputs at that location, and then moving on to the next cell location, and so on. An example is a
local sine function, where each individual output grid cell value is the result of the sine function
applied to the corresponding input cell. Most of the functions that create new grids based on
analyses performed on vector layers are local functions.

Raster analysis local operations

Single grid local operations


Compute new values for each grid cell to create a new output grid

Mathematical operations
Reclassification

Multiple grid local operations

Create a new output grid by combining data from multiple grids


Similar to vector based overlay techniques but conceptually simpler
Can use these to compute summary statistics

Global Functions

Global functions perform operations based on the input of the entire grid. Functions such as
calculating distance grids and flow accumulation require processing of the entire grid for creating
output.

Focal Functions
Certain grid operations do consider neighborhoods, so that the output cell is the result of a
calculation performed on a group of cells determined by a window (known as a kernel or focus)
around the cell of interest. These operations are called focal functions. For example, a smoothing
(low-pass filter) algorithm will take the mean value of a 3-x-3 cell kernel, and place the output
value in the location of the central cell. If the kernel contains locations that are outside of the grid,
these locations are not used in the calculation.

In this focal mean example, the outlined cells in the input grid are averaged, and the resultant
value is placed in the center cell of the kernel in the output grid. This is done for every 3-x-3
neighborhood in the input.
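A minimal numpy sketch of such a 3-x-3 focal mean, in which kernel locations outside the grid are simply not used (the grid values are invented):

import numpy as np

grid = np.array([[1., 2., 3.],
                 [4., 5., 6.],
                 [7., 8., 9.]])
out = np.empty_like(grid)
rows, cols = grid.shape
for i in range(rows):
    for j in range(cols):
        # 3x3 kernel around (i, j), clipped at the grid edges
        window = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        out[i, j] = window.mean()
print(out)    # the centre cell becomes the mean of all nine inputs: 5.0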

Zonal Functions

Other operations perform functions based on a group of cells with a common value (a zone) in
one of the inputs. These operations are known as zonal functions, since they calculate single
output values for a group of cells based on the location of the input zone.

Here, the zones are defined by the zone grid. The function is a zonal sum, which sums all
the input cells per zone, and places the output in each corresponding zone cell in the
output. The zone boundaries are included only for illustrative purposes, and are not
actually part of the dataset.

Raster analysis zonal operations


Zonal operations operate on groups of cells that share the same values
– Zones may be contiguous or non-contiguous
– Zonal operations can work on single or multiple grids
Single grid zonal operations
– Identify the boundary of zones that contains the same values
– Identify the center of zones where similar values exist
Multiple grid zonal operations
– Summarizes the cell values for one grid based on the cell values of another grid
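A sketch of such a zonal sum over two invented grids, using numpy's bincount to total the values per zone and write each zone's total back to its cells:

import numpy as np

zones  = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [3, 3, 2]])           # zone grid (cells sharing a value form a zone)
values = np.array([[2., 4., 1.],
                   [3., 5., 2.],
                   [6., 1., 1.]])        # value grid

sums = np.bincount(zones.ravel(), weights=values.ravel())  # total value per zone id
out  = sums[zones]                       # each cell receives its zone's total
print(out)    # zone 1 -> 9, zone 2 -> 9, zone 3 -> 7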

Performing Grid Analysis

Raster analytical functions are performed in a number of different ways:

1. The Spatial Analyst toolbar


2. Arc Toolbox tools
3. Scripting
4. Command line
34. Vector Based Analysis : Multilayer Operations : Union,
Intersection, Clip
Introduction:
The basic data types in a GIS reflect traditional data found on a map. Accordingly, GIS
technology utilizes two basic types of data.
a. Spatial data describes the absolute and relative location of geographic features.
b. Attribute data describes characteristics of the spatial features. These characteristics can be
quantitative and/or qualitative in nature. Attribute data is often referred to as tabular data.

Spatial Data Models:


Traditionally spatial data has been stored and presented in the form of a map. Three basic types of
spatial data models have evolved for storing geographic data digitally. These are referred to as:
Vector, Raster; and Image.

Vector Data:
Basic entities of a vector data are point, line, node, segment, and polygon.
Point: (x,y) coordinate pair, the basis of all higher order entities;
Line: a straight line feature joining two points.
Node: the point defining the end of one or more segments;
Segment: a series of straight line sections between two nodes;
Polygon: (Area, Parcel): an area feature whose perimeter is defined by a series of enclosing
segments and nodes.

Advantages:
• Data can be represented at its original resolution and form without generalization.
• Graphic output is usually more aesthetically pleasing (traditional cartographic representation).
• Since most data, e.g. hard copy maps, is in vector form, no data conversion is required.
• Accurate geographic location of data is maintained.

Disadvantages:
• The location of each vertex needs to be stored explicitly.
• For effective analysis, vector data must be converted into a topological structure. This is often
processing intensive and usually requires extensive data cleaning.
• Algorithms for manipulative and analysis functions are complex and may be processing
intensive. Often, this inherently limits the functionality for large data sets, e.g. a large number of
features.
• Continuous data, such as elevation data, is not effectively represented in vector form. Usually
substantial data generalization or interpolation is required for these data layers.

Vector Overlay Processing - Specific Theory

In a vector-based system topological map overlay operations are much more complex than the
raster-based case, as the topological data is stored as points, lines and/or polygons. This requires
relatively complex geometrical operations to derive the intersected polygons, and the necessary
creation of new nodes (points) and arcs (lines), with their combined attribute values.
In a vector-based system, topological map overlay operations allow the
polygon features of one layer to be overlaid on the polygon, point, or line features of another
layer. Depending on the objectives of the Overlay operation, different output features can result.

General Concepts Of Polygon Overlay Operations

• In GIS, the normal case of polygon overlay takes two map layers and overlays them
• each map layer is covered with non-overlapping polygons
• If we think of one layer as "red" and the other as "blue", the task is to find all of the
polygons on the combined "purple" layer

• Attributes of a "purple" polygon will contain the attributes of the "red" and "blue"
polygons which formed it
o can think of this process as "concatenating" attributes
o usually a new attribute table is constructed that consists of the combined old
attributes, or new attributes formed by logical or mathematical operations on the
old ones
• Number of polygons formed in an overlay is difficult to predict
o there may be many polygons formed from a pair of "red" and "blue" polygons,
with the same "purple" attributes
• When two maps are overlaid, the result is a map with a mixture of 3- and 4-arc
intersections
o four-arc intersections do not generally occur on simple polygon maps

Operations requiring Polygon Overlay

Windowing

• The windowing operation, in which a window is superimposed on a map and everything
outside the window is discarded, is a special case of polygon overlay

Buffering

• Buffering around points, lines and polygons is another case


• buffers are generated around each point or straight line segment
• the combined buffer is found by polygon overlay
Planar Enforcement

• The process of building points, lines and areas from digitized "spaghetti"
• wherever intersections occur between lines, the lines are broken and a point is inserted
• The result is a set of points, lines and areas which obey specific rules.

Classification of Vector Overlay Operations

Topological vector overlay operations can be classified in two ways: by the element types
contained in the layers to be overlaid, or by operation type (for example, the user wants to
generate a layer comprising the Union, Intersection, or some other Boolean operation of the two
input layers).
When classifying a vector overlay operation by the first method, the element types each layer
contains are considered. The following table identifies which overlay options exist for each
possible combination of element types contained in the two input layers.

Input layer element types   Points             Lines               Polygons
Points                      Points Coincide    Point in Line       Point in Polygon
Lines                       Point in Line      Line Intersection   Line in Polygon
Polygons                    Point in Polygon   Line in Polygon     Polygon Overlay

Vector Overlay Processing - Algorithms

Since vector-based topological map overlay operations involve overlaying the point, line, or
polygon features of one layer on the polygon features of another layer, the three following
processing algorithms are fundamental:

Point-in-Polygon

Line-in-Polygon

Polygon-on-Polygon (i.e. Polygon Overlay)

1. Point-in-Polygon Processing
Point features of one input layer can be overlaid on polygon features of another input layer.
Point-in-Polygon analysis identifies the polygon within which each point falls. The result of a
Point-in-Polygon overlay is a set of points with additional attributes (i.e. the attributes of the
polygon within which each point lies).
The basic algorithm used to perform Point-in-Polygon analyses is detailed below:

Usually a minimum bounding rectangle for the polygon is defined within the system by its
maximum and minimum coordinates. It is easy to determine if a point (or line end) is inside or
outside the rectangle's extent. If the point lies outside the minimum bounding rectangle, then it
must also lie outside the polygon and the analysis is complete.
However if the point falls inside the minimum bounding rectangle then the following
further processing is required:
From the point, a line parallel to an axis is drawn (usually either the X or Y axis). This parallel
line extends from the point (or line end) to beyond the extremities of the polygon, with its
direction usually towards the highest values of this axis.

The system then counts the number of times this “half line” intersects with the polygon boundary.
If the result is an even number (or zero), then the point lies outside the polygon.
If the result is an odd number, then the point falls inside the polygon.
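A minimal sketch of this half-line (ray casting) test in Python, assuming the polygon is given as a list of (x, y) vertex tuples; note that it does not handle the degenerate cases discussed below.

    def point_in_polygon(px, py, polygon):
        """Half-line test: count crossings of a ray from (px, py)
        towards +X with the polygon boundary; odd count = inside."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # does this edge straddle the horizontal line through py?
            if (y1 > py) != (y2 > py):
                # x coordinate where the edge crosses that line
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:      # crossing lies on the half-line
                    inside = not inside
        return inside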
The Point-in-Polygon algorithm described above works very well for special cases of “island”
polygons, polygons with holes, and concave polygons.

However, problems occur if a point falls:


Exactly on a boundary,
On a node or a vertex, or
When a line segment is collinear to the half-line.

2. Line-in-Polygon Processing
Line (arc) features of one input layer can be overlaid on the polygon features of another input
layer. A line can be made up of many segments, so Line-in-Polygon analysis identifies which
polygon (if any) contains each line or line segment. The result of a Line-in-Polygon overlay is a
new layer containing lines with additional attributes (i.e. the attributes of the polygon within
which each line falls). Sometimes a line segment falls directly on a polygon boundary instead of
within the polygon. In this special case, the additional line attributes will contain the attributes of
both polygons - lying to the left and right sides of the line.

As lines and polygons are both made up of line segments, Line-in-Polygon analysis requires
determining whether any of the overlaid line segments intersect. Determining whether two line
segments intersect is a simple mathematical calculation (a sketch of one such test follows the list
below); the complexity of the operation comes from the number of intersection checks needed for
a complete Line-in-Polygon overlay analysis. Geographical Information Systems therefore use the
following algorithm to minimize the number of calculations required.
• Minimum bounding rectangles of both the line and the polygon are used to reduce the
number of computations required. A check is first made to determine whether the
minimum bounding rectangle of the line falls completely outside the minimum bounding
rectangle of the polygon (defined by the element’s minimum and maximum coordinates).
If this is the case then the line definitely does not lie within the polygon, and the analysis
is complete; otherwise the following further processing is required:
• As the line may be made up of many line segments, each line segment has to be tested for
intersection or inclusion within the polygon. If the line segment lies outside the polygon
minimum bounding rectangle, then that segment also lies outside the polygon and can be
disregarded, otherwise the following processing must continue:
• Testing whether a line segment is totally inside a polygon or not can be difficult because
polygons can have concavities or holes within them, therefore it is not enough to simply
determine if both end-points of a line segment lie within the polygon. To deal with this
problem, the polygon and line segment are both rotated such that the line segment lies
parallel to one of the axes (X or Y).
• The next step uses the “half-line” test (as described in the Point-in-Polygon Analyses
theory above) along the axis parallel to the line segment to determine whether each
segment end-point is in or out and note all segment intersections with the polygon. Note
that half-line intersection points are not necessarily also segment intersection points.
• If the results of the half-line testing show that both end-points are in and there were no
segment intersections, then the whole line lies inside the polygon. Otherwise, if a
point starts outside, then the first part of the line segment is outside the polygon until the
first segment intersection point, the next part of the line segment is inside the polygon
until the next segment intersection point, and so on.
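For reference, here is a common orientation-based segment intersection test, written as a sketch in Python; it is one standard formulation, not the algorithm of any particular GIS package.

    def segments_intersect(p1, p2, p3, p4):
        """True if segment p1p2 intersects segment p3p4.
        Points are (x, y) tuples; touching endpoints count."""
        def orient(a, b, c):
            # signed area: >0 counter-clockwise, <0 clockwise, 0 collinear
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        def on_segment(a, b, c):
            # c is collinear with a-b: is c inside the bounding box of a-b?
            return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                    min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
        d1 = orient(p3, p4, p1)
        d2 = orient(p3, p4, p2)
        d3 = orient(p1, p2, p3)
        d4 = orient(p1, p2, p4)
        if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
            return True
        # collinear / endpoint-touching special cases
        if d1 == 0 and on_segment(p3, p4, p1): return True
        if d2 == 0 and on_segment(p3, p4, p2): return True
        if d3 == 0 and on_segment(p1, p2, p3): return True
        if d4 == 0 and on_segment(p1, p2, p4): return True
        return False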

3. Polygon-on-Polygon Processing

This process merges overlapping polygons from two input layers to create new polygons in an
output layer. The result of a Polygon-on-Polygon overlay is an output layer containing new
polygons with merged attributes (i.e. the attributes from each of the two overlapping polygons).

Note: As polygons are made up of line segments, Polygon-on-Polygon analysis requires the
determination of whether these overlaid line segments intersect. The processing for Polygon-on-
Polygon analysis is therefore essentially the same as for Line-in-Polygon analysis (as detailed in
the Line-in-Polygon theory above).
35. Spatial representation of geographic data in raster and
vector models
Abstract:

There is a growing need to move away from the traditional interpretation of data through manual
mapping and manual database management systems, whose accuracy is often suspect. Map
making and geographic analysis are not new, but a GIS performs these tasks faster and with
more sophistication than traditional manual methods do. A Geographic Information System (GIS)
is a computer-based tool for mapping and analyzing things that exist and events that happen on
earth. GIS technology integrates common spatial database operations such as query and
statistical analysis with the unique visualization and geographic analysis benefits offered by
maps. Here is an attempt to explain the basic concepts of spatial data representation of
geographical features in vector and raster models, which are important in understanding the
components of GIS.

Introduction:
Spatial data in GIS has two primary data formats: raster and
vector. Raster uses a grid cell structure, whereas vector is
more like a drawn map. Raster format generalizes the scene
into a grid of cells, each with a code to indicate the feature
being depicted. The cell is the minimum mapping unit.
Raster has generalized reality: all of the features in the cell
area are reduced to a single cell identity. The raster cell’s
value or code represents all of the features within the grid; it
does not maintain true size, shape, or location for individual
features. Even where “nothing” exists (no data), the cells
must be coded.
Vector format has points, lines, polygons that appear normal,
much like a map. Vectors are data elements describing
position and direction. In GIS, vector is the map-like drawing
of features, without the generalizing effect of a raster grid.
Therefore, shape is better retained. Vector is much more
spatially accurate than the raster format.

Raster Model:

All spatial data models are approaches for storing the spatial
location of geographic features in a database. For contrast,
vector storage implies the use of vectors (directional lines) to
represent a geographic feature. Vector data is characterized by
the use of sequential points or vertices to define a linear
segment. Each vertex consists of an X coordinate and a Y coordinate.

Raster is a method for the storage, processing and display of spatial data. Each area is divided
into rows and columns, which form a regular grid structure. Each cell must be rectangular in
shape, but not necessarily square. Each cell within this matrix contains an attribute value as well
as location coordinates. The spatial location of each cell is implicitly contained within the
ordering of the matrix, unlike a vector structure, which stores topology explicitly. Areas
containing the same attribute value are recognized as such; however, raster structures cannot
identify the boundaries of such areas as polygons.

Raster data is an abstraction of the real world where spatial data is expressed as a matrix of cells
or pixels, with spatial position implicit in the ordering of the pixels. With the raster data model,
spatial data is not continuous but divided into discrete units. This makes raster data particularly
suitable for certain types of spatial operation, for example overlays or area calculations.

Raster structures may lead to increased storage in certain situations, since they store each cell in
the matrix regardless of whether it is a feature or simply 'empty' space.

Grid size and resolution:

'Pixel' is a contraction of the words 'picture element', commonly used in remote sensing to
describe each unit in an image. In raster GIS the pixel equivalent is usually referred to as a cell
element or grid cell. A pixel/cell refers to the smallest unit of information available in an image
or raster map. It is the smallest element of a display device that can be independently assigned
attributes such as color.

Raster data models incorporate the use of a grid-cell data structure where the geographic area is
divided into cells identified by row and column. This data structure is commonly called raster.
While the term raster implies a regularly spaced grid, other tessellated data structures do exist in
grid-based GIS systems. In particular, the quadtree data structure has found some acceptance as
an alternative raster data model.

The size of cells in a tessellated data structure is selected on the basis of the data accuracy and the
resolution needed by the user. There is no explicit coding of geographic coordinates required
since that is implicit in the layout of the cells. A raster data structure is in fact a matrix where any
coordinate can be quickly calculated if the origin point is known, and the size of the grid cells is
known. Since grid-cells can be handled as two-dimensional arrays in computer encoding many
analytical operations are easy to program. This makes tessellated data structures a popular choice
for many GIS software. Topology is not a relevant concept with tessellated structures since
adjacency and connectivity are implicit in the location of a particular cell in the data matrix.
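As a worked example of this implicit georeferencing, here is a minimal sketch assuming the common convention of an upper-left grid origin with rows counting downwards (the function name and convention are our own):

    def cell_to_coord(row, col, origin_x, origin_y, cell_size):
        """Centre coordinate of a raster cell, given the grid origin
        (upper-left corner) and the cell size."""
        x = origin_x + (col + 0.5) * cell_size
        y = origin_y - (row + 0.5) * cell_size
        return x, y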
Several tessellated data structures exist; however, only two are commonly used in GIS. The most
popular cell structure is the regularly spaced matrix or raster structure. This data structure
involves a division of spatial data into regularly spaced cells. Each cell is of the same shape and
size. Squares are most commonly utilized.

Since geographic data is rarely distinguished by regularly spaced shapes, cells must be classified
according to the most common attribute within the cell. Determining the proper resolution for a
particular data layer can be a concern. If one selects too coarse a cell size, then data may be
overly generalized. If one selects too fine a cell size, then too many cells may be created,
resulting in a large data volume, slower processing times, and a more cumbersome data set. As
well, one can imply accuracy greater than that of the original data capture process, and this may
result in some erroneous results during analysis.

Vector Model:

In the vector data model, features are represented in the form of coordinates. The basic units of
data (points, lines and areas) are composed of a series of one or more coordinate points. For
example, a line is a collection of related points and an area is a collection of related lines. Vector
lines are often referred to as arcs and consist of a string of vertices terminated by a node. A node
is defined as a vertex that starts or ends an arc segment. Point features are defined by one
coordinate pair, a vertex. Polygonal features are defined by a set of closed coordinate pairs.

A point is defined by a single pair of coordinate values. A point normally represents a geographic
feature that is too small to be represented as a line or area. For example, a port, a dock, or a
hatchery can be represented as a point, depending on the scale of the map on which it is to be
shown.

A line is defined by an ordered list of coordinate pairs defining the points through which the line
is drawn. Linear features include contour lines, ship tracks and streams. At most mapping scales
these features will retain their linear form, although the degree of detail and generalization will
vary with scale. A line is synonymous with an arc.
An area is defined by the lines that make up its boundary.
Areas are also referred to as polygons. Examples include
ocean basins, lagoons, mangroves, lakes, etc. When shown
on maps at a very small scale these features may also
eventually become points.

Raster and Vector Structures:

Raster polygons are filled with cells. For a single polygon, the vector format usually has a single
node and several vertices to mark the boundary direction changes. Connected polygons are
simply two blocks of cells in the raster format, but in vector they share a common border and
some common nodes.

Raster advantages:
1. A relatively simple data structure.
2. The simple grid structure makes analysis easier.
3. The computer platform can be “low tech” and inexpensive.
4. Remote sensing imagery is natively in raster form.

Vector advantages:
1. In general, vector data is more map-like.
2. It offers very high resolution.
3. The high resolution supports high spatial accuracy.
4. Vector formats have storage advantages.
5. The general public usually understands what is shown on vector maps.
6. Vector data can be topological.
6. Vector data can be topological.
Vector disadvantages:
1. May be more difficult to manage than raster
formats.
2. Require more powerful, high-tech machines.
3. The use of better computers, increased management needs, and other considerations often
make the vector format more expensive.

36. Network Analysis – Concepts & Evaluation
Network Analysis:-

With ArcGIS Network Analyst, users can model real-world transportation networks and solve
routing problems within ArcGIS Desktop, ArcGIS Server, and ArcGIS Engine. This seminar
introduces the ArcGIS 9.1 Network Analyst extension and network dataset. The presenter
demonstrates how the extension solves various problems, such as finding the best route or
closest facility with travel directions, determining service areas, and generating origin-
destination cost matrices. In addition, the presenter explains how to create simple and
multimodal network datasets to support various types of network analysis.

The presenter discusses:

* ArcGIS Network Analyst extension

* Various problems it solves

* Network datasets (Sources, attributes, and connectivity)

Network Analysis

*Connectivity tracing

*Cycle detection

*Establishment of flow directions

*Upstream and downstream tracing

*Isolation tracing

*Trouble call tracing

The Geodatabase Data Model

Integration of 2 scales road network analysis (country and city)

Basically, when travelling on a long-distance trip, we need the small-scale map to plan the travel
route. The route may pass through many cities, and we need a larger-scale map to find an optimal
route within those cities. The small-scale network is analyzed first to get the route. The inbound
and outbound roads of each city are identified during this analysis. Then the network analysis for
the city of interest is performed using the same inbound and outbound roads as origin and
destination respectively. Together with dynamic network analysis and variable road traffic
speeds, we simulate the real situation of our route.
Integration of 2 scales road network analysis (country and city)

In Thailand, road signage is not as good as in developed countries. When we want to travel
between two cities, we can use GIS network analysis software to find the best route. However,
the selected route will pass through several cities and towns. Problems start when you enter
towns and cities which do not have good road signs: you may lose your orientation or direction
in them.

Our solution is to do a two-level network analysis: the first level is the country road network, the
second level is the city road network. Linkages between these two road networks are created.
When the first-level analysis is done, users can select a town or city on the analyzed route to do
additional network analysis at the city level. The city road network analysis is handled
automatically by the software.

The software is written in Visual Basic; the GIS component is MapObjects. The Network
Analysis module is developed using Dijkstra’s Algorithm.
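The cited module is implemented in Visual Basic with MapObjects; for illustration, here is a minimal Python sketch of Dijkstra's algorithm over an adjacency list (the graph representation and function name are our own simplification):

    import heapq

    def dijkstra(graph, origin):
        """graph: {node: [(neighbour, cost), ...]}.
        Returns the minimum travel cost from origin to every reachable node."""
        dist = {origin: 0.0}
        heap = [(0.0, origin)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                      # stale queue entry
            for nbr, cost in graph.get(node, []):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return dist

For example, dijkstra({"A": [("B", 5)], "B": [("C", 2)], "C": []}, "A") returns {"A": 0.0, "B": 5.0, "C": 7.0}. The full path can be recovered by also storing each node's predecessor when dist is updated.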

Network Data Models

Two levels of network layers must be considered here: the country-level network model and the
city-level network model. However, the network data models of these two levels are identical.
The details of each level's layers and network data model are as follows:

Country level layers

The country network database is created from the 1:1,000,000 highway map of Thailand. This
map shows all highway roads and the positions of the cities. The road network layer of the
country level is captured in Shapefile format. Within the application, the node layer and
node-arc topology are built in order to create the relationship between nodes and lines. The
unique ID of each road and node is calculated as well. All node-arc topology information and the
IDs of the two layers are stored in the Shapefile.

Each node at this level of the network can be either an intersection or a city. The node type must
be specified in the attribute table. In the case of a city, the city code must be entered. The
database structure is illustrated in figure 1.

Figure 1. Database Structure of Country level network layers

City level layers

City-level layers are digitized from a larger-scale map. This map illustrates all streets in the city,
including the inbound and outbound highway roads. The two main layers are the street network
and the intersections. The node-arc topology of the network must be built before performing the
analysis, just as in the country-level network. A Highway ID field contains the unique
country-level network ID for every inbound and outbound road.

At this level of the network, the node layer that represents all intersections in the city contains
only a unique ID. Figure 2 illustrates the database structure of these two layers.

The names of the layers are defined as CityCode_road and CityCode_int for the street and
intersection layers respectively.

Figure 2. Database Structure of City level network layers


The linkage between country level and city level networks

There are many ways to create the linkage between these two levels of network. In our research,
we use the simplest. As mentioned above, a city code is kept in the attribute table of each
country-level node, so when the user picks a city node on the analyzed route, the city code of the
selected node can be retrieved. The application uses this city code to access the right city
network layers, because the city network layers' names start with the city code followed by the
suffixes _road and _int for street and intersection respectively.

Moreover, the unique ID of the country-level network is recorded on the inbound and outbound
roads of the city level. The application recognizes the inbound and outbound roads of the
selected city in the country-level network. This recognition is useful for automatically performing
the network analysis at the city level using the same inbound and outbound roads.
Figure 3 illustrates the linkage between the country- and city-level networks.

Figure 3. Linkage from country level network to city level network

Network Model

The network topology data model can be built within the application. This topology data model
describes the relationship of nodes and edges. The travel cost data must be specified. There are
two kinds of travel cost: the cost of travelling on each edge, and the cost of turning at an
intersection or passing through a city. The travel cost of each edge and turn varies with the time
of day. The user can use either the default cost or a user-defined cost.

The origin node must be selected first. The departure time is then specified. The network is
analyzed by using Dijkstra’s Algorithm together with the travel and turning costs of the arrival
time at each road and intersection. This analysis creates the data structure to keep the optimal path
from an origin to each intersection and the total cost of travel from origin to each intersection.
After selecting the destination node, the optimal path is created and travel cost and travel time are
derived.

Both country and city level networks use this network data model when building and analyzing
the network. The only difference is that an origin node, destination node, and departure time of
city level network are derived automatically from the analysis of country level network. The
departure time of city level path is the arrival time of the city node.

Software Architecture
The application software comprises three main modules: the Network Topology module, the
Network Analysis module and the Network Editor module. Both levels of network can be
processed using these three modules.

Network Topology
The Network Topology module is used for building Arc-Node topology and creating node layer.
Then the network topology is loaded into memory for network analysis. The default edge travel
cost can be loaded by using this module.

Network Analysis
The purpose of the Network Analysis module is to select the origin node, specify the departure
time, analyze the network, select the destination node, and create the optimal route. After the
optimal route is found, a city node can be selected in order to load the city network layers, derive
the necessary information, and calculate the optimal path for the city network.
Network Editor

Network Editor module includes 3 submodules. These submodules are Network creation, Linkage
of networks creation, and Network parameter editor.

In our research, both levels of network were already created in ArcInfo. The ArcInfo coverages
are then converted into Shapefile format.

To create the linkage of networks, ArcView GIS is used to get the unique ID of all inbound and
outbound road of each city node. Then the highway field value of inbound and outbound road of
city level network is filled in.

Basically, the network parameters include the traffic speed along the road and turning cost. The
network parameter editor lets user edit the traffic speed and turning cost in both levels of network.
Example

Conclusion (Integration of 2 scales road network analysis)

The integration of two-scale road network analysis is practical. Travelers can make an optimal
route plan on a small-scale country map. They can then further their analysis within a large-scale
city map in order to understand the street pattern and analyze an optimal path from an inbound
road to an outbound road. With the Network Parameter Editor tools, travelers simulate the virtual
road networks of the country and city levels by updating traffic speeds and turning costs.
However, collecting the traffic speed and turn waiting time for each hour for all roads and
intersections is quite difficult. Another problem is defining the linkage between the two levels of
network. Typically, almost all city-level maps in Thailand are of poor quality in terms of
orientation, measurement, and accuracy, so it takes time to find the identical roads on a country
map and a city map. If the traffic speeds, turn waiting times, and network linkages are complete,
the combination of the two network models is an efficient and practical model of the real-world
road network.

Key Features

Net Engine is versatile and has been designed to facilitate advanced network analysis in
several ways.

Net Engine provides

* Data structures and methods that are optimized for fast retrieval of network connectivity
* A way to efficiently store network data structures to a permanent disk file
* Ready-to-use algorithms such as the shortest path algorithm
* Support for some advanced modeling concepts that facilitate modeling hierarchical and
multimodal transportation networks
* A specialized memory management module that makes efficient use of computer memory
necessary for very large networks
* Support for databases from commercial suppliers, such as Etak, Tele Atlas, and NAVTEQ, as
well as an organization's internal network data sets
* An interface to MapObjects which combines an ActiveX control and more than 45
programmable ActiveX automation objects
* Deployment license options for both stand-alone client- and network server-based systems
37. Network analysis:- C-matrices for evaluating connectivity
of the network
In the real world, objects are connected to each other. Using GIS in support of network utility
management typically involves many types of features that may have connectivity to each other.
Several GIS vendors have developed GIS software whose potential functions can provide for
network management and analyses, but each system has a proprietary format to deal with the
connectivity between geometry or features. Topology in GIS is generally defined as the spatial
relationship between such connecting or adjacent features and is an essential prerequisite for
many spatial operations such as network analysis. There are, in general, three advantages of
incorporating topology in GIS databases: data management, data correction and spatial analysis.
Topology structures provide an automated way to handle digitizing and editing errors, and enable
advanced spatial analyses such as adjacency, connectivity and containment.

Network: a number of people or places among which there are one-on-one interactions –
friendships, airline routes, telephone calls, automobile trips, roads.

• Connectivity: (existing links between two or more objects)


– A null graph is made of two or more nodes without connection between them;
– A linear graph is made of two or more connected nodes;
– A tree graph is made of n nodes connected with n - 1 arcs and without circuits.
• A network can be represented with a graph
• A graph can be seen as a structure made of points (vertices) and lines (arcs) with two
points at the extremities
• Points (nodes) and lines (arcs) have a unique geographic location and represent a spatial
structure.
• Each vertex represents a single node of the network, and each line corresponds to a
connection between two nodes.

Network analysis with graphs: Some network measures

1. Beta Index: Compares the number of links with the number of nodes in a network
2. Gamma Index: Compares the actual number of links with the maximum number
3. Alpha Index: compares the number of actual (fundamental) "circuits" with the maximum
number of all possible fundamental circuits
4. Associated Number (Koenig Number): measures the centrality of a node by the number
of links needed to connect this node with the (topologically) most distant node in the
network
5. Shimbel Index: Measure of the minimum number of links necessary to connect one node
with all nodes in the network
6. Diameter of a network: Number of links in the shortest path between the furthest pair of
nodes.
7. Nodal Degree: The sum of the (direct) links which connect a node to adjacent nodes. It
can be calculated by summing the rows or columns of the (direct) connection matrix.
Limitation: it takes no account of indirect links.
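As an illustration, here is a sketch of the first three indices for a planar network, using the standard planar-graph formulas (v = number of nodes, e = number of links); the function name is our own:

    def graph_indices(num_nodes, num_links):
        """Classic connectivity indices for a planar network.
        Formulas assume a planar graph with at least 3 nodes."""
        v, e = num_nodes, num_links
        beta  = e / v                      # links per node
        gamma = e / (3 * (v - 2))          # actual / maximum possible links
        alpha = (e - v + 1) / (2 * v - 5)  # actual / maximum possible circuits
        return beta, gamma, alpha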

How can we systematically select the shortest distance routes between two points say Oi and
Dj? (Note that for transportation-system modeling or for an individual's decision making, we
need to know alternative routes, ordered according to distance).

First, we must abstract the transport system into a graph or matrix.


The network graph illustrates the network in schematic fashion, with each possible O and D
presented as a point (“node” or “vertex”) and each possible direct link between an O and a D
presented as a straight line segment (“linkage” or “edge”).

The connectivity matrix abstracts the network as a table, with each possible node (vertex)
presented as a row (an origin) and as a column (a destination). Each possible direct link between
an O and a D is presented as a “1” in the appropriate cell; if there is no direct OiDj link, a “0”
appears in the appropriate cell. Thus, network graphs or matrices are two formats for topological
data (TOPOLOGY: connections and distances among spatial data elements). Within GIS,
including topology implies providing the GIS with a spatial data matrix of explicit connections
among points, adjacencies among areas, etc.

For a simple system, the matrix tells you nothing you can’t see in the graph. The matrix is
necessary for three reasons:
1. For a very complex system, such as all the OD connections in a 300-by-300 urban
transportation plan or the diagram of interchanges in the Interstate highway system, the
matrix will instantly give you information you couldn’t easily compile from the graph.
2. Computers cannot “see” graphs, but they’re very good at reading lots of zeros and
ones very quickly.
3. We can perform simple matrix algebra on the matrices, to derive very powerful results.

Matrix manipulation for network analysis: Connectivity

If we multiply the matrix above by itself, yielding C^2 (the squared connectivity matrix), we have
a new matrix that tells us the number of two-edge routes from each O to each D.

If we multiply C^2 by C^1 to yield C^3, we have the number of three-link connections between
each O and D.

If we add the entries, cell by cell, in C^1, C^2, C^3, and C^4, we get a matrix of the total number of
possible ways to get from each O to each D. This is generally called the T-matrix, for total
accessibility.
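A minimal NumPy sketch of this computation, assuming a small example 5-node network (a simple ring, invented here for illustration) and summing powers up to 4:

    import numpy as np

    # 0/1 connectivity matrix for an example 5-node ring network
    C = np.array([[0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]])

    # C^k counts the k-edge routes between each O and D;
    # summing the powers gives the T-matrix of total accessibility
    T = sum(np.linalg.matrix_power(C, k) for k in range(1, 5))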
In some cases, when actual distance is not as very meaningful as the number of connections (e.g.,
airline travel, from the perspective of the traveller; or connections within an integrated circuit; or
connections among linked computers on the Internet), these topological distances are all we need.
However, for walking or driving routes within a city, we care about the total distance or time, not
the number of connections.

• A simple way to approach this is through our graph, this time adding distance or time
values (we call this a valued graph of a network).
• An alternative is an L matrix, with time or distance entered in the cells rather than
“yes/no” to a connection. In the example below, note that we insert “zero” along the
principal diagonal and “infinity” in cells where there is no direct connection.

We can derive a total valued matrix by repeatedly combining the L matrix with itself - at each
step taking, for every O-D cell, the minimum over all intermediate nodes of the summed
distances - until we cover the diameter of the network (a sketch follows the list below). This tells
us the minimum distance (in time or ground distance) between each O and each D: a very
important piece of information.
These measures are useful in several ways.

1. They provide us with the dij we need for any kind of transportation modeling.
2. They show us the minimum distance between any O and D, expressed in number of links, in
time, or in ground distance.
3. They allow us to understand how an additional edge (link) or the removal of an edge will affect
the accessibility of a node and the connectivity of the network.
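Here is a sketch of the repeated "min-plus" combination described above, assuming L is a NumPy array with zeros on the principal diagonal and infinity where there is no direct connection (as in the valued matrix just described); the function names are illustrative:

    import numpy as np

    def min_plus(A, B):
        """Min-plus 'product': cell (i, j) is the cheapest i -> j trip
        via one intermediate node, min over k of A[i, k] + B[k, j]."""
        n = A.shape[0]
        out = np.full((n, n), np.inf)
        for i in range(n):
            for j in range(n):
                out[i, j] = np.min(A[i, :] + B[:, j])
        return out

    def total_valued_matrix(L, diameter):
        """Repeat the min-plus combination until paths up to the network
        diameter are covered; the result is the minimum-distance matrix."""
        D = L.copy()
        for _ in range(diameter - 1):
            D = min_plus(D, L)
        return D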

How can a GIS make use of this insight to construct minimum-distance routes?

The network of possible routes is entered into the GIS as a topological matrix: what nodes link
directly to what nodes.

The distances don’t have to be explicitly entered, because the GIS has the actual location of each
node; it can calculate the distance along each direct link.

1. It can take the approach outlined above:

• Identify whether the origin and destination share a direct link, and if so, assigning the
route to that link.
• If the origin and destination don’t share a direct link, then identify the first link along the
route, store that, then identify the link from the second node to the desired destination,
and on and on.
• Identify the shortest link from the origin.
• Is this the desired destination? If yes, record the route. If no, use the shortest link from
this intermediate node.
• Is this the desired destination? If yes, record the route. If no, use the shortest link from
this intermediate node.
• And on, until we’ve arrived at the desired destination.
• This minimum branching tree algorithm is greedy, and is not guaranteed to find the
shortest total route.

In either case, the GIS can link multiple destinations, to establish the minimum distance for a
delivery route, such as we’ll be doing in the first case. We’ll come up with a set of customers,
and develop a route among them.

a) A diameter is “the maximum number of steps required to move from any node to any other
node through the shortest possible routes within a connected network.” or “the number of linkages
needed to connect the two most remote nodes on the network.”
b) "An algorithm is a set of mathematical expressions or logical rules that can be followed
repeatedly to find the solution to a question".

c) Each cell (x,y) in a new matrix (AB) which is the product of two other matrices (A and B), is
the sum of:

• the product of the first cell in the Xth row of the matrix A times the first cell in the Yth
column of matrix B, plus
• the product of the second cell in the Xth row of matrix A times the second cell in the Yth
column of matrix B, plus
• the product of the third cell in the Xth row of matrix A times the third cell in the Yth
column of matrix B;
• and on, until we've exhausted the length of the Xth row in matrix A and the length of the
Yth column in matrix B.
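In compact notation, this rule is simply the standard matrix product,

    (AB)_{xy} = \sum_{k} a_{xk} \, b_{ky} ,

so for a 0/1 connectivity matrix C, cell (x, y) of C^2 counts the two-link routes from node x to node y.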

Note that the rows of matrix A must have the same length as the columns of matrix B. In this
simple network analysis, we're multiplying a square matrix by itself, so that's not a problem.
In a connectivity matrix, cell (3,1) is 0 if there's no direct link between 3 and 1; 1 if there is. Cell
(1,4) is 0 if there's no direct link between 1 and 4; 1 if there is. What does the product of cell
(3,1) and cell (1,4) tell us? Why would we be interested in adding this product to the product of
(3,2)(2,4), to the product of (3,3)(3,4), to the product of (3,4)(4,4), to the product of (3,5)(5,4)?

Network Connectivity

• Topology
Topology is the common term used to describe physical connectivity between features.
Topology is generally represented by links and nodes. A feature instance is connected to
another feature instance via a connection point. This connection point is described by a
node, and the path between two nodes is described by a link. Topology is derived from
the underlying geometry.
Link and Node model

There are two common properties for a link: cost and direction. Cost is the value taken
into account when finding the best path; commonly the cost is the length of the link,
which is adequate for most simple network analysis problems. Direction specifies in
which direction the network can be traversed on that link. There are also two
properties for a node: in/out cost and degree. In/out cost is the accumulated distance
from the starting point, used to find the next distance value at the other node of the
same link. Node degree represents the number of links associated with the node.

• Directional network
For some applications topological features require direction as well as connection. If we
consider the flow of water in a river, the topology must be modelled to take into account
the flow direction of the water. However for other applications such as analysing boat
traffic on a river it is more sensible to model the network as non directional or two-way.
Moreover in a road network, if we consider the road feature, it may be one way or two
way and as is the case in some cities it may change depending on the time of day etc..
Thus there are requirements to be able to model the direction of connectivity whilst
retaining flexibility to suit the application in question.
There are several ways to handle the directional flow of a network. Some systems use a
special feature to set the directional flow of the link, whereas other systems set the
directional flow in the application using additional coding. This research sets the
directional flow as a property of a line and provides the database structure for the
directional network as a directed line. A Link feature derived from the directed line is a
directed link.

• Connectivity types
In order to model real world complexity we also need to be able to express the concept of
different types of connectivity. Whilst it may be acceptable to allow road features to
connect if they share the same 2D space, it is not appropriate for all situations e.g. fibre
optic cables, water mains etc…

To enable the different types of feature connectivity, we need to model the three ways to
connect two link features: end-connection, middle-connection and cross-connection, and
the two ways connecting link features to node features: end-connection and middle-
connection.

Connectivity types

Network Family
In the real world there are natural groupings of objects; the various types of roads and paths
that make up the road network; rivers, streams, canals, lakes etc. that make up the natural
water network; high voltage cables, low voltage cables and transformers etc. that make up an
electrical network. With some major exceptions these “families” of objects do not
topologically connect with features of other families. The concept of a “network family” is
used for establishing the various rules of connectivity between feature types. Features that do
not belong to the family cannot connect. This mechanism also provides a simple visual means
for the modification of specific connectivity rules and also provides a method for dealing with
semantic issues e.g. “street” and “strasse” can both be mapped onto the network family
feature “road”.

A family contains a collection of real-world features that may have connectivity to each other
in the same network. The example for a simple road network family is shown below.

Road Family

Road – Road via junction


Road – Trunk Road via junction
Road – Slip Road via junction
Trunk Road – Trunk Road via junction
Trunk Road – Slip Road via junction
Slip Road – Motorway via junction

A matrix representing the connectivity is shown in Figure 1. The first row and column is a
list of line type features that may have connectivity. The inner cells show the Point type
feature that facilitates connectivity between them.

Figure 1. The Matrix Table of Road Family

The family could also be shown as a tree structure by setting the root feature. The view of
the tree structure varies depending on the root selected; however, the relationships between
features remain the same. An example tree structure is shown in Figure 2.

Figure 2. The Tree Structure of Road Family

• Connectivity across network families

Network analysis across two families may be required for some applications, e.g. a route
planning application may require movement between the road and the rail network
families. The network can trace across families if there is a common point connection
feature in both families. For instance, a rail station is in both the “road” and the “rail”
families and therefore a trace can cross between them via a rail station.
38 Network Analysis & Network Data Model
Introduction:-

Networks are an integral part of our daily lives. We drive cars from home to work on a street
network. We cook our dinner with natural gas or electricity that is delivered through networks of
utility lines. We catch up on the news and send e-mail through the Internet, the largest wide-area
network.

Defining a Network Data

A network dataset contains network elements (edges, junctions, and turns) that are generated
from simple point and line features.

Edges are generated from linear features and are connected by junctions. Edges in the network
dataset are bi-directional.

Junctions are generated from point features. They connect edges and facilitate navigation. A
junction may be connected to any number of edges.

Turns are generated from line features or turn tables and describe transitions between edges.

What Components Make Up Networks?

Networks have two parts: physical network and the logical network.

The physical network consists of the data layers used to generate a network and provides the
features to generate network elements.

The logical network consists of a collection of tables that models network connectivity and
references network element relationships.
Network Analyst

Provides a rich environment with easy-to-use menus and tools as well as the robust functionality
available in the geoprocessing environment for modeling and scripting.

Networks are typically either directed flow networks or undirected flow networks. In a directed
flow network, the flow moves from a source toward a sink and the resource moving makes no
travel decisions (e.g., river system).

In an undirected flow system, flow is not entirely controlled by the system. The resource may
make travel decisions that affect the result (e.g. a traffic system).


Areas of application of Networks

•Highway management organizations

•Rail management agencies

•Gas and oil pipeline industries

•Utility industries

•Police and emergency management organizations

•Military planning organizations

•Transit agencies

•Automatic Vehicle Location systems

Uses of Network Analyst

•Drive-time analysis

•Point-to-point routing

•Route directions

•Service area definition


•Shortest path

•Optimum route

•Closest facility

•Origin-destination analysis.

Network Data model:

A GIS for transportation is a set of interconnected hardware, software, data, people,
organizations and institutional arrangements for collecting, storing, analyzing and
communicating particular types of information about the earth.

Two highlights of the multifaceted nature of the transportation data model are:
First, transportation entities have obvious physical descriptions but can also have logical
relationships with other transportation entities.
Second, entities exist both in the real world and in the database or virtual world. The relationships
between the physical and logical realms are often one-to-many, creating database design
complexities.

Transportation Networks in a GIS: The Node-arc Model

In the basic "node-arc" representation of a transportation network, we deal exclusively with
directed networks (that is, networks consisting of directed arcs), since transportation systems
typically have important directional flow properties (e.g., one-way streets, differences in
directional travel times depending on the time of day).

Figures: single-node representation of an intersection; expanded representation of an
intersection; representing public transit system entrance, egress and transfers.

Linear Referencing Systems

An LRS typically consists of the following components:
i) a transportation network;
ii) a location referencing method (LRM);
iii) a datum.
The transportation network consists of the traditional node-arc topological network. The LRM
determines an unknown location within the transportation network using a defined path and an
offset distance along that path from some known location. This provides the basis for maintaining
event data within the network. The datum is the set of objects with "known" (directly measured)
georeferenced locations. The datum ties the LRS to the real world and supports the integration of
multiple networks, multiple LRMs for a given network, multiple event databases and cartographic
display of the data.
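As an illustration of an LRM, here is a minimal sketch that locates an event at a given offset distance along a defined path (the path is a list of (x, y) vertices; the function name is our own):

    from math import hypot

    def locate_event(path, offset):
        """Linear referencing: return the (x, y) position at a given
        offset distance measured along a path of (x, y) vertices."""
        travelled = 0.0
        for (x1, y1), (x2, y2) in zip(path, path[1:]):
            seg = hypot(x2 - x1, y2 - y1)
            if seg == 0:
                continue                          # skip zero-length segments
            if travelled + seg >= offset:
                t = (offset - travelled) / seg    # fraction along this segment
                return x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            travelled += seg
        return path[-1]   # offset beyond the path end: clamp to the last vertex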

Figure: linear referencing methods - road name and milestone.

Case studies:

1. Information system for rural road network planning

In India, nearly 50% of 6 lakh villages have road access. The Government of India has committed
to providing full connectivity under a special programme known as the Pradhan Mantri Gram
Sadak Yojana (PMGSY).

GIS Based Approach: The various data items required for the development of comprehensive
rural road planning can be broadly categorized under three categories:

(1) Village data: the name and code number, demographic data (population) and infrastructure
data

(2) Rural road data: road reference data, road geometric details, road pavement condition,
terrain and soil type

(3) Map data: the map at block level should be prepared at 1:50,000 scale (location of
habitations/settlements, boundaries, road network, water bodies (ponds, lakes, etc.), rivers and
irrigation canals)

The database developed above has been applied in the Rupauli Block in Purnia District of Bihar.
Figure: Optimum Network of Rupauli

• The Village and Road Information System (V&RIS) developed under a GIS environment is very
useful for problem identification, planning, allocation of resources and location of various
socio-economic facilities for integrated rural development

• It is also useful for creation, maintenance and accessing the GIS database

• Further, using the information available in the road network layer, it is easy to estimate the
construction cost of selected links

2. A case of Tunisia water supply system:

The system comprises an 18-reservoir network with both serial and parallel interconnections, as
well as extensive water transfer and conveyance subsystems.

The primary purpose of this complex water resource system is to provide drinking water for the
country's urban and rural areas, irrigation and industrial water supply, flood and low flow
management, and hydropower generation.
Figure: GIS applications for water supply.

3. Sewage Treatment System Management Using GIS

The Muskingum County GIS department (Ohio) has over 10,000 existing systems of record, and
over 300 new systems are installed each year.

Figure: Parcel information with land contours, roads, and soil types displayed.

•The GIS allows sanitarians to perform sewage treatment system reviews of existing systems in
minutes

•With GIS as a visual tool, sanitarians can now have detailed phone consultations with property owners.

•It allows sanitarians to quickly utilize geographic information critical to decision making, and
eliminates the need to refer to cumbersome printed maps
39. Methods for evaluating point clusters: Random and Cluster

The science of geography attempts to explain and predict the spatial distribution of human
activity and physical features on the Earth’s surface. Geographers use spatial statistics as a
quantitative tool for explaining the geographic patterns of distribution.
The term spatial pattern often refers to various levels of spatial regularity, which often include
local clusters of points, global structure of a surface, etc.

Objectives of spatial analysis


1. To detect spatial patterns that cannot be detected by visual analysis.
2. To confirm whether spatial patterns found in visual analysis are significant.

Point pattern analysis:


Historically, Point Pattern Analysis was first noted in the works of botanists and ecologists in the
1930s (Chakravorty, 1995). However, in the intervening years, many different fields have also
started to use point pattern analysis, such as archeology, epidemiology, astronomy, and
criminology
Points are the most basic objects in GIS. They are used for representing zero-dimensional spatial
objects, i.e. locations in two- or higher-dimensional space. In GIS, however, points are also used
for representing spatial objects, including lines and polygons, that are relatively small compared
with the study region.
The word 'pattern' in its purest sense refers to the location of occurrences relative to one another.
Pattern is independent of both scale and density. In map pattern analysis, it is the arrangement of
geometric objects which we study, that is, the points, lines and areas which we use to represent
real-world objects.
Points may represent cities, industrial sites, stores or natural phenomena such as plant or animal
species. Point pattern analysis deals with the distribution of homogeneous points. In basic point
pattern analysis, we focus on the spatial aspect of point distribution, neglecting the attributes.
Detecting a pattern in the distribution of points may signal that a process is at work to produce
the arrangement.
Points can be distributed randomly, uniformly or clustered. In a random pattern there is no
apparent ordering: some points may be clustered, some more remote and some at intermediate
distances. In a uniform pattern every point is as far away from its neighbor as possible. In a
clustered pattern many points are concentrated close together. If a point pattern represents cases
of a disease, then a point cluster suggests that the disease is epidemic or that there is a source of
water pollution near the point cluster. Because of this, in point pattern analysis we use a
quantitative measure that indicates the degree of clustering.
In general, Point Pattern Analysis can be used to describe any type of incident data. For instance,
we may want to conduct “Hot Spot” analysis in order to better understand locations of crimes, or
else we may want to study breakouts of certain diseases to better see whether there is a pattern. In
both of these cases, Point Pattern Analysis can be of great help to institutions and policymakers in
their decisions on how to best allocate their scarce resources to different areas.
Criteria
In order to conduct Point Pattern Analysis, your data must meet five important criteria:
1. The pattern must be mapped on a plane, meaning that you will need both latitude and
longitude coordinates.
2. A study area must be selected and determined prior to the analysis.
3. The Point Data should not be a selected sample, but rather the entire set of data you seek
to analyze.
4. There should be a one-to-one correspondence between objects in the study area and
events in the pattern.
5. The Points must be true incidents with real spatial coordinates. For example, using the
centroids of a census tract would not be an especially useful process.

Patterns of points

Agglomeration or grouping:
Suppose theory suggests that a particular set of objects (plants, animals, people, towns, etc.)
tends to group or agglomerate in certain ways. Point Pattern Analysis is helpful in measuring
various characteristics of the groups (size, spacing, density, etc.) and leads to the testing of
hypotheses derived from theory. For example studies of animal behavior suggests that certain
types of spatial patterns help to verify theories of territoriality and social organization.

Diffusion:
Many theories have been proposed for the way individuals or ideas spread or spatially
multiply. Point pattern analysis can be helpful in verifying the existence of a diffusion process
and in calibrating rates of change. An example comes from the study of spread according to
principles based on the nearness of possible communities and their resistance to accepting
ideas; by analyzing the pattern at various moments in time and in different environments,
these notions can be tested.

Competition:
It is often desirable to investigate spacing characteristics when it is suspected that competitive
forces are at work. Sometimes competition yields maximum spacing and other times
grouping. A well known example comes from the literature on town spacing. Spatial aspects
of economic theories of marketing can be tested by point pattern analysis.

Segregation or associations:
Hypotheses about the existence of spatial segregation in a many-species population of
individuals can be tested with point pattern analysis. Following urban rent theory we may
expect two kinds of land uses to repel each other. This expectation can be tested, as well as
theoretical expectations of an association among several land uses.

Pattern change
Many theoretical statements deal directly with the manner in which pattern change. For example
the birth and death process of plant and animal populations as well as human populations may
very well be studied by point pattern analysis. Interest might be in the rates of change in patterns.

Techniques to analyze point pattern data


When we are examining incident data, we often need first to obtain the coordinates of each
incident and determine the study area that we wish to use. For instance, if we were examining one
hundred robberies within a square mile, we would not want to use a study area of 5 square miles.
Although this may sound obvious, we also want to examine our data and make sure that we are not
estimating beyond areas for which we have no data. In general, when we are examining areas to
see whether incidents are clustered, we use a null hypothesis that there is no clustering present
and that incidents are evenly spread throughout the study area. Sometimes we may specify that
incidents are evenly distributed after controlling for certain variables, such as population
density.

In general there are three types of techniques:

1. Quadrat Count Methods
2. Kernel Density Estimation
3. Nearest Neighbor Distance

Quadrat count methods

This method involves simply recording and counting the number of events that occur in each
quadrat. In general, it is important to remember that large quadrats produce a very coarse
description of the pattern, but as quadrat size is reduced, many quadrats may become too small
and some may contain no events at all.
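
A minimal sketch of the quadrat count, assuming the event coordinates are available as NumPy
arrays; the grid size is a choice the analyst must vary (see the limitations below), and the
function name is our own:

```python
import numpy as np

def quadrat_counts(x, y, bounds, n_cells=5):
    """Count events per quadrat on an n_cells x n_cells grid.

    x, y   : arrays of event coordinates
    bounds : (xmin, xmax, ymin, ymax) of the study area
    """
    xmin, xmax, ymin, ymax = bounds
    counts, _, _ = np.histogram2d(x, y, bins=n_cells,
                                  range=[[xmin, xmax], [ymin, ymax]])
    # Variance-to-mean ratio of the counts: ~1 for a random (Poisson)
    # pattern, >1 suggests clustering, <1 suggests a regular pattern.
    vmr = counts.var() / counts.mean()
    return counts, vmr
```

Comparing the variance-to-mean ratio against 1 (the value expected under a homogeneous Poisson
process) gives a simple indication of clustering; repeating the count for several cell sizes
addresses the scale dependence discussed next.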

Limitations of the quadrat method

The quadrat method aggregates point data into raster data. This implies that the quadrat
method ignores a large amount of locational information in the observed point distribution.
Because of this, the quadrat method has several limitations to which we should pay attention:
1. The result depends on the cell size.
2. The result depends on the definition of the region in which points are distributed.
3. The quadrat method cannot distinguish some different distributions.
These limitations are quite similar to those of the nearest neighbor distance method.
Consequently, one solution is to try various cell sizes and interpret the result as a function of
the spatial scale represented by the cell size.
Kernel density estimation
This method counts and weights the incidents falling within an area (a kernel) centered at each
location where the estimate is made, producing a continuous density surface rather than a hard
partition of the incidents. This method is very good for analyzing point patterns to discover
hot spots.
1. This method provides us with a useful link to geographical data because it is able to transform
our data into a density surface.
2. Our choice of r, the kernel bandwidth, strongly affects our density surface.
3. We can also weight these patterns with other data, such as population density and
unemployment rates.
4. In Dual Kernel Estimates, you are able to weight the estimates against another set of
incidents. For instance, you might want to analyze the number of assaults against
establishments that are allowed to serve liquor.
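
A minimal sketch of the idea, assuming a Gaussian kernel evaluated on a regular grid; both the
kernel shape and the bandwidth r (point 2 above) are the analyst's choices, and the function
name is our own:

```python
import numpy as np

def kernel_density(x, y, grid_x, grid_y, bandwidth):
    """Build a density surface from event points using a Gaussian kernel.

    grid_x, grid_y : 1-D arrays defining the output grid
    bandwidth      : the kernel radius r, which strongly shapes the surface
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx, dtype=float)
    for xi, yi in zip(x, y):
        d2 = (gx - xi) ** 2 + (gy - yi) ** 2
        density += np.exp(-d2 / (2 * bandwidth ** 2))
    # Normalise so each event contributes unit mass to the surface
    return density / (2 * np.pi * bandwidth ** 2)
```

Running this twice with different bandwidths makes point 2 concrete: a small r produces many
sharp local peaks, while a large r smears the events into a few broad hot spots.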

Nearest neighbor distance

To describe the degree of spatial clustering of a point distribution, the nearest neighbor
distance method uses the average distance from every point to its nearest neighboring point. The
nearest neighbor distance is an 'absolute' measure of point clustering: it depends on the size of
the region in which the points are distributed, so we cannot compare two sets of points
distributed in regions of different sizes.
In general there are three different functions that users are able to employ in nearest neighbor
analyses:
G Function:
This is the simplest measure. Instead of summarizing the nearest neighbor distances with a
mean, the G function allows us to examine their cumulative frequency distribution. The
shape of this function can tell us a lot about the way the events are clustered in a point
pattern. If events are clustered together, G increases rapidly at short distances; if events are
evenly spaced, G increases slowly up to the distance at which most events are spaced, and
only then increases rapidly.
F Function:
Instead of accumulating the fraction of nearest-neighbor distances between events, this
measure selects point locations anywhere in the study region at random, and the minimum
distance from each of them to any event in the pattern is determined.
K Function:
Imagine placing a circle of a defined radius centered on each event in turn. The number of
events inside each circle is totaled, and the mean count over all events is computed. This
mean count is then divided by the overall study area. Because all of the events are used, the
K function provides more information about patterns and clusters than either G or F.
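
As a minimal sketch (assuming SciPy is available), the mean nearest neighbor distance can be
computed with a k-d tree; the Clark-Evans ratio shown is one standard way to normalise for the
region-size dependence noted above, and the function name is our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_stats(points, area):
    """Mean nearest-neighbour distance and Clark-Evans ratio.

    points : (n, 2) array of event coordinates
    area   : area of the study region S
    """
    tree = cKDTree(points)
    # k=2 because each point's nearest hit is itself at distance zero
    dist, _ = tree.query(points, k=2)
    nn = dist[:, 1]
    # Expected mean NN distance for a random pattern of the same intensity
    expected = 0.5 / np.sqrt(len(points) / area)
    ratio = nn.mean() / expected   # <1 clustered, ~1 random, >1 dispersed
    return nn.mean(), ratio
```

Sorting the nn distances and plotting their cumulative fraction gives the empirical G function
described above.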
Limitations of the nearest neighbor distance method
1. We cannot distinguish all point distributions by the nearest neighbor distance alone.
2. The result depends on the definition of S, the region in which points are distributed.
The processes responsible for the locations of features such as human settlements, store types,
plants and animals, and groups of plants and animals may often be approximated by the Poisson
process model.
40 Ground Control Points (GCP)

Ground Control Points (GCP):-

GCPs are physical points on the ground whose positions are known with respect to some
horizontal coordinate system and/or vertical datum. A GCP is any point which is recognisable
on remotely sensed images, maps and aerial photographs, and which can be accurately located
on each of these. It can then be used as a means of reference between maps or, more commonly,
between maps and digital images. GCPs are often used in the geometric correction of remotely
sensed images and in surveying.

History:-

Ground control has traditionally been established through ground surveying techniques in the
form of triangulation, trilateration, traversing, and leveling. Currently, the establishment of
ground control is aided by the use of GPS procedures.

Use of ground control points:-

• When mutually identifiable on the ground and on a photograph, GCPs can be used to
establish the spatial position and orientation of a photograph relative to the ground at the
instant of exposure.
• They are normally used to associate projection coordinates with locations on a raw
(uncorrected) image; however, they can in principle be used to relate locations in any
two georeferencing systems, normally a raw image coordinate system and some map
projection system (see the sketch below).
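
As a sketch of how such a relation is established, a first-order (affine) transformation can be
fitted to the GCP pairs by least squares. The function below is illustrative, not any specific
package's API:

```python
import numpy as np

def fit_affine(pixel_xy, map_xy):
    """Fit map = affine(pixel) from GCP pairs; needs at least 3 points.

    pixel_xy : (n, 2) raw image coordinates (pixel, line)
    map_xy   : (n, 2) projection coordinates (easting, northing)
    """
    pixel_xy = np.asarray(pixel_xy, float)
    map_xy = np.asarray(map_xy, float)
    A = np.column_stack([np.ones(len(pixel_xy)), pixel_xy])   # rows [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)       # (3, 2) matrix
    residuals = A @ coeffs - map_xy
    rmse = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))  # per-GCP RMS error
    return coeffs, rmse
```

With more than three GCPs the fit is over-determined, and the per-GCP RMS error is the usual
check on GCP quality.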

Types of ground control points:-

1) Horizontal control points - positions are known planimetrically in some XY coordinate
system, e.g. a state plane coordinate system.
2) Vertical control points - have known elevations with respect to a level datum, e.g. mean
sea level.
3) Both - a single point with known planimetric position and known elevation.

Requirement of ground control points:-

Accurate ground control is essential to virtually all photogrammetric operations, because
photogrammetric measurements can only be as reliable as the ground control on which they are
based. Measurements on the photo can be accurately extrapolated to the ground only when we
know the location and orientation of the photograph relative to the ground at the instant of
exposure.

Number of GCPs:-

A ground control point segment contains up to 256 ground control points. 45 points have to be
selected for each scene. GCPs should be selected on panchromatic data.
GCPs distribution:-

• GCPs should be uniformly selected in the scene- select points near the edges of the image
and with even distribution in the image.
• GCPs selection should also respect terrain variations in the scene- select point at both
highest and lowest elevations.

GCPs locations:-

• Cultural features are usually the best points to use as GCPs. These include road and
railroad intersections, river bridges, large low buildings (hangars, industrial buildings,
etc.), airports, etc.
• Line features should have well-defined edges. A GCP should always be selected at the
center of an intersection. To use an intersection as a GCP, the two line features forming
the intersection have to cross at an angle larger than 60 degrees.
• Natural features are generally not preferred because of their irregular shapes. If a natural
feature has well-defined edges, it may be used as a ground control point. Examples are
forest boundaries, forest paths, forest clearings, river confluences, etc. When selecting
such points it has to be taken into account that certain boundaries are subject to
variation (forest, water bodies) and may differ between images and maps.
• Applying local enhancements can be very useful for exact definition of the image
position of a GCP.

Survey of ground control points:-

1) After photography – ensuring that the points are identifiable on the image.
2) Before photography – control points may be premarked with artificial targets. Crosses
that contrast with the background land cover make ideal control point markers. Their size
is selected in accordance with the scale of the photography to be flown and their material
form can be quite variable.
eg. Markers painted on contrasting sheets of Masonite, plywood, or heavy cloth.

Overlapping areas:-

Identical GCPs should be selected in the areas where two or more Landsat scenes overlap. Such
points will have the same X,Y,Z coordinates and will differ only in corresponding image
coordinates.

Each GCP has to be accompanied by:-

• copy of the part of paper map showing selected point and its surrounding. or
• image chips from scanned map showing selected point and its surrounding. or
• written description or sketch of the point

Each ground control point has the following values associated with it:-

• Id: A unique numeric identifier for the control point. If it is negative, it is interpreted as
indicating that the point is a check point, and should not contribute to the transformation
model.
• System 1 X: The X coordinate in the first georeferencing system. This is normally a pixel
location in the image.
• System 1 Y: The Y coordinate in the first georeferencing system. This is normally a line
location in the image.
• System 1 Elevation: The elevation of the location in the first georeferencing system. This
is normally zero, and ignored by applications.
• System 2 X: The X coordinate in the second georeferencing system. This is normally a
location in projection coordinates.
• System 2 Y: The Y coordinate in the second georeferencing system. This is normally a
location in projection coordinates.
• System 2 Elevation: The elevation in the second georeferencing system. This should be
zero if it is not used.

Format:-

ID1, ID2, ID3, ID4, P, L, X, Y, Z

ID1 – L7 Scene Number
ID2 – Map Sheet Number
ID3 – GCP number (unique ID for each scene: e.g. 1,2,3,4,5, …..)
ID4 - GCP in overlapping area only – [overlapping scene number/GCP number]
P – Image Pixel (Column) Coordinate
L – Image Line (Row) Coordinate
X - X (Easting) Map Coordinate
Y – Y (Northing) Map Coordinate
Z - Elevation
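
A minimal sketch for reading records in this format, assuming plain comma-separated text lines
with the fields in the order listed above (the class layout and names are our own):

```python
from dataclasses import dataclass

@dataclass
class GCP:
    scene: str       # ID1 - L7 scene number
    map_sheet: str   # ID2 - map sheet number
    gcp_id: int      # ID3 - GCP number, unique per scene
    overlap_id: str  # ID4 - overlapping scene/GCP reference, may be empty
    pixel: float     # P - image pixel (column) coordinate
    line: float      # L - image line (row) coordinate
    x: float         # X - easting map coordinate
    y: float         # Y - northing map coordinate
    z: float         # Z - elevation

def parse_gcp(record: str) -> GCP:
    f = [s.strip() for s in record.split(",")]
    return GCP(f[0], f[1], int(f[2]), f[3],
               float(f[4]), float(f[5]), float(f[6]), float(f[7]), float(f[8]))
```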

Accuracy:-

Image coordinates: 0.1 pixel
Map coordinates: 5 m
Elevation: 5 m

Examples of some GCPs:-

Road intersection, Road and railroad intersection etc.

Flight Planning:-

• Flight planning is the process of producing a flight plan that describes a proposed
photographic mission.
• It is the work done prior to the acquisition and development of the photography.
• It is an art as well as a science.
• Adverse conditions can degrade the quality of the photography.
• A great deal of time, effort, and expense goes into the planning and execution of a
photographic mission.
• Flight inconsistencies: since the aircraft is not an absolutely stable platform, not all
photographs are truly vertical, and several inconsistencies can be encountered.
• Flights are usually scheduled between 10 a.m. and 2 p.m. for maximum illumination and
minimum shadow.

Flight planning parameters:-

1) REQUIREMENTS OF A FLIGHT PLAN

Specifications:
• camera and film requirements
• scale, flying height, endlap, sidelap
• tilt and crab tolerances, etc.

2) PURPOSE OF PHOTOGRAPHY
compilation of topographic maps in a stereoscopic plotting instrument

Requirements:
• Good Metric Quality Photos: Calibrated Cameras And Films (High-resolution)
• Favorable B/H Ratio

3) PHOTOGRAPHIC SCALE
• Scale of Final Map produced
• Contour interval
• Capabilities of the stereo-plotting instruments
• Enlargement ratio (usually 5x)
• Variation of scale due to ground elevation

4) FLYING HEIGHT
a) Given the focal length of the camera lens and the compilation scale of the map, the
necessary flying height can be calculated.

b) Vertical accuracy in topographic mapping:

C-Factor = Flying Height / Contour Interval
Flying Height = Contour Interval x C-Factor
C-Factor (of instruments): 750-250
5) COVERAGE: ENDLAP AND SIDELAP

6) COMPUTATION OF FLIGHT PLAN:

7) WEATHER CONDITIONS:
This is beyond the control of even the best planner. Only a few days of the year are ideal for aerial
photography. In order to take advantage of clear weather, commercial aerial photography firms
will fly many jobs in a single day, often at widely separated locations.

Flight Planning steps:-

• Determine project requirements.
• Project area, photo scale, end lap, side lap, direction of strips, coordinate system, camera
type and film type are determined.
• Base distance between two exposure stations along flight line and the distance between
two adjacent flight strips are calculated.
• The flight height is calculated for each strip by checking the terrain height of related strip.
• Coordinates of exposure stations are obtained.
• The photo scale is selected depending on the project purposes, such as the desired accuracy
of the final product, the intended use, etc. The smallest acceptable photo scale is selected in
order to reduce the number of stereo models.
• A wide-angle or normal-angle camera type is selected according to the characteristics of the
project area. In general, wide-angle cameras are preferred for smooth terrain and
normal-angle cameras are preferred for hilly or urban areas.
• In photogrammetric applications, in order to obtain stereo models, the end lap is assumed to
be 60%. To cover the project area completely with stereo models along the strips, the side
lap is assumed to be 30%.
• The flight line directions are generally planned in the East-West or North-South direction,
although a different direction may be used in some cases.

To eliminate most of the errors that might occur later, using software for the calculations and
preparing all plans digitally is considered the best method, e.g. dedicated flight planning
software.

Geometric aspects of the task of flight planning:-

Parameters needed for this task are:

1. focal length of the camera to be used
2. the film format size
3. photo scale desired
4. size of the area to be photographed
5. average elevation of the area to be photographed
6. overlap desired
7. side lap desired
8. ground speed of the aircraft to be used

Based on the above parameters the Mission Planner prepares computations and a flight map that
indicate to the flight crew:

1. flying height above datum from which the photos are to be taken
2. location, direction & number of flight lines to be made over the area to be photographed
3. time interval between exposures
4. number of exposures on each flight line
5. total number of exposures necessary for the mission

Flight plans are normally portrayed on a map for the flight crew. However, old photography, an
index mosaic, or even a satellite image may be used for this purpose.

Other important things for mission specification:-

1) mission timing
2) ground control requirements
3) camera calibration characteristics
4) film and filter type
5) exposure conditions
6) scale tolerance
7) endlap, side lap
8) tilt &crab
9) photographic quality
10) product indexing
11) product delivery schedule

Figure: overlap (endlap) along the flight line and sidelap between adjacent strips.
The computations prerequisite to preparing a flight plan are given in the following
example:-

A study area is 10 km wide in the east-west direction and 16 km long in the north-south
direction. A camera having a 152.4 mm focal length lens and a 230 mm format is to be used.
The desired photo scale is 1:25,000 and the nominal endlap and sidelap are to be 60% and 30%.
Beginning and ending flight lines are to be positioned along the boundaries of the study area.
The only map available for the area is at a scale of 1:62,500. This map indicates that the average
terrain elevation is 300 m above datum. Perform the computations necessary to develop a flight
plan.

Solution:-

a) Use north-south flight lines to minimise the number of lines required and consequently
the number of aircraft turns and realignments necessary.
Flying in a cardinal direction often facilitates the identification of roads, section lines, and
other features that can be used for aligning the flight lines.

b) Find the flying height above terrain and add the mean site elevation to find the flying
height above mean sea level:
H = f/S + h_avg = 0.1524 m / (1/25,000) + 300 m = 3810 m + 300 m = 4110 m

c) Determine the ground coverage per image from the film format size and photo scale:
Coverage per photo = 0.23 m / (1/25,000) = 5750 m on a side

d) Determine the ground separation between photos on a line for a 40% advance per photo
(i.e. 60% endlap):
0.40 x 5750 m = 2300 m between photo centers

e) Assuming an aircraft speed of 160 km/hr, the time between exposures is

(2300 m/photo) / (160 km/hr x 1000 m/km / 3600 sec/hr) = 51.75 sec

Use 51 seconds.

f) Because the intervalometer can only be set in whole seconds, the number is rounded off.
Recalculate the distance between photo centers by reversing the above equation:
51 sec/photo x 160 km/hr x 1000 m/km / 3600 sec/hr = 2267 m

g) Compute the number of photos per 16 km line by dividing this length by the photo advance.
Add one photo to each end and round the number up to ensure coverage:
16,000 m/line / (2267 m/photo) + 1 + 1 = 9.1 photos/line

Use 10 photos.

h) If the flight lines are to have a sidelap of 30% of the coverage, they must be separated by
70% of the coverage:
0.70 x 5750 m coverage = 4025 m between flight lines

i) Find the number of flight lines required to cover the 10 km study area width by dividing
this width by the distance between flight lines. This division gives the number of spaces
between flight lines; add 1 to arrive at the number of lines:
10,000 m width / (4025 m per flight line) + 1 = 3.48

Use 4 lines.

The adjusted spacing between lines for using four lines is

10,000 m width / (4 - 1 spaces) = 3333 m/space

j) Find the spacing of flight lines on the map of 1:62,500 scale:

3333 m x 1/62,500 = 53.3 mm

k) Find the total number of photos needed:

10 photos/line x 4 lines = 40 photos
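
The entire computation can be scripted. The sketch below is a minimal Python rendering of steps
b) through k); the function name and argument layout are our own, and it follows the rounding
rules used in the example (intervalometer rounded down to whole seconds, photo and line counts
rounded up):

```python
import math

def flight_plan(width_m, length_m, focal_m, format_m, scale,
                endlap=0.60, sidelap=0.30, elev_m=300.0, speed_kmh=160.0):
    """Reproduce the worked example above; inputs mirror the problem statement."""
    H = focal_m / scale + elev_m               # b) flying height above MSL: f/S + h_avg
    cover = format_m / scale                   # c) ground coverage per photo (on a side)
    base = (1 - endlap) * cover                # d) photo base for 60% endlap
    speed_ms = speed_kmh * 1000 / 3600
    interval = int(base / speed_ms)            # e)-f) intervalometer, whole seconds
    base_adj = interval * speed_ms             #        recomputed photo base
    photos = math.ceil(length_m / base_adj + 2)    # g) photos per line (+1 each end)
    spacing = (1 - sidelap) * cover            # h) distance between flight lines
    lines = math.ceil(width_m / spacing + 1)   # i) number of flight lines
    return H, interval, photos, lines, photos * lines

print(flight_plan(10_000, 16_000, 0.1524, 0.23, 1 / 25_000))
# (4110.0, 51, 10, 4, 40) -- matching steps b) through k) above
```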

Goal of flight planning:-

The main goal of planning is finding the best-fitting flight lines and camera exposure stations.
In order to cover the project area with a minimum number of models, flight lines and camera
exposure stations must be planned carefully. This is also important for flight safety, for
reducing aerial survey operational costs, and for speeding up the preparation and execution of
the photo missions and flights.
41 Global Positioning System: Concept, Coordinates & Types
Concept:-

As the name suggests, the Global Positioning System or GPS is used for tracking the position of
an object with the help of satellite signals received by that object. Utilizing a constellation of
at least 24 medium earth orbit satellites that transmit precise microwave signals, the system
enables a GPS receiver to determine its location, speed, direction and time. GPS provides
continuous three-dimensional positioning 24 hours a day throughout the world. It is a burgeoning
technology which provides unequalled accuracy and flexibility of positioning for navigation,
surveying and GIS data capture. Developed by the United States Department of Defense, it is
officially named NAVSTAR GPS.

By positioning we understand the determination of the positions of stationary or moving
objects. These can be determined as follows:

1. In relation to a well-defined coordinate system, usually by three coordinate values, or

2. In relation to another point, taking one point as the origin of a local coordinate system.

The first mode of positioning is known as point positioning, the second as relative positioning.
If the object to be positioned is stationary, we speak of static positioning. When the object is
moving, it is called kinematic positioning. Usually, static positioning is used in surveying and
kinematic positioning in navigation.

The GPS uses satellites and computers to compute positions anywhere on earth. The GPS is based
on satellite ranging: the position on the earth is determined by measuring the distance from a
group of satellites in space. The basic principles behind GPS are really simple, even though the
system employs some of the most high-tech equipment ever developed. In order to understand GPS
basics, the system can be categorised into five logical steps:

1. Triangulation from the satellites is the basis of the system.
2. To triangulate, the GPS measures distance using the travel time of a radio message.
3. To measure travel time, the GPS needs a very accurate clock.
4. Once the distance to a satellite is known, the position of the satellite in space must also
be known.
5. As the GPS signal travels through the ionosphere and the earth's atmosphere, the signal is
delayed.

To compute a position in three dimensions, four satellite measurements are needed. The GPS uses
a trigonometric approach to calculate positions. The GPS satellites are so high up that their
orbits are very predictable, and each of the satellites is equipped with a very accurate atomic
clock.
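
As a rough illustration of this satellite-ranging approach, the sketch below estimates a
receiver position and clock bias from four or more pseudoranges by iterated linearized least
squares. It is a simplified model (no atmospheric delays, illustrative function name), not the
algorithm of any particular receiver:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Estimate receiver (x, y, z) and clock bias from >= 4 pseudoranges.

    sat_pos      : (n, 3) satellite positions in an earth-centred frame, metres
    pseudoranges : (n,)  measured ranges, metres (contaminated by clock error)
    """
    est = np.zeros(4)  # initial guess: earth centre, zero clock bias (metres)
    for _ in range(iters):
        geom = np.linalg.norm(sat_pos - est[:3], axis=1)  # geometric ranges
        residual = pseudoranges - (geom + est[3])
        # Jacobian of the predicted pseudorange: minus the line-of-sight unit
        # vectors, plus a column of ones for the clock-bias term
        J = np.column_stack([-(sat_pos - est[:3]) / geom[:, None],
                             np.ones(len(sat_pos))])
        delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
        est += delta
    return est[:3], est[3] / C  # position (m) and clock bias (s)
```

With more than four satellites the system is over-determined, which is what allows the weighted
averaging and GDOP assessment described later in this section.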

Components of a GPS

The GPS is divided into three major components:

The Control Segment
The Space Segment
The User Segment

The Control Segment

The Control Segment consists of five monitoring stations (Colorado Springs, Ascension Island,
Diego Garcia, Hawaii, and Kwajalein Island). Three of the stations (Ascension, Diego Garcia, and
Kwajalein) serve as uplink installations, capable of transmitting data to the satellites,
including new ephemerides (satellite positions as a function of time), clock corrections, and
other broadcast message data, while Colorado Springs serves as the master control station. The
Control Segment is the sole responsibility of the Department of Defense (DOD), which undertakes
construction, launching, maintenance, and virtually constant performance monitoring of all GPS
satellites.

The DOD monitoring stations track all GPS signals for use in controlling the satellites and
predicting their orbits. Meteorological data also are collected at the monitoring stations,
permitting the most accurate evaluation of tropospheric delays of GPS signals. Satellite tracking
data from the monitoring stations are transmitted to the master control station for processing. This
processing involves the computation of satellite ephemerides and satellite clock corrections. The
master station controls orbital corrections, when any satellite strays too far from its assigned
position, and necessary repositioning to compensate for unhealthy (not fully functioning)
satellites.

The Space Segment

The Space Segment consists of the constellation of NAVSTAR earth-orbiting satellites. The
current Defence Department plan calls for a full constellation of 24 Block II satellites (21
operational and 3 in-orbit spares). The satellites are arrayed in 6 orbital planes, inclined 55
degrees to the equator. They orbit at altitudes of about 12,000 miles each, with orbital periods
of 12 sidereal hours (i.e., determined by or from the stars), approximately one half of the
earth's rotation period; this arrangement provides continuous 3-D position fixes. The next block
of satellites is called Block IIR, and they will provide improved reliability and have a capacity
of ranging between satellites, which will increase the orbital accuracy. Each satellite contains
four precise atomic clocks (Rubidium and Cesium standards) and has a microprocessor on board for
limited self-monitoring and data processing. The satellites are equipped with thrusters which can
be used to maintain or modify their orbits.

The User Segment

The user segment is the total user and supplier community, both civilian and military. The User
Segment consists of all earth-based GPS receivers. Receivers vary greatly in size and complexity,
though the basic design is rather simple. The typical receiver is composed of an antenna and
preamplifier, radio signal microprocessor, control and display device, data recording unit, and
power supply. The GPS receiver decodes the timing signals from the 'visible' satellites (four or
more) and, having calculated their distances, computes its own latitude, longitude, elevation,
and time. This is a continuous process and generally the position is updated on a
second-by-second basis, output to the receiver display device and, if the receiver provides data
capture capabilities, stored by the receiver-logging unit.
GPS Positioning Types

Absolute Positioning

This mode of positioning relies upon a single receiver station. It is also referred to as
'stand-alone' GPS because, unlike differential positioning, ranging is carried out strictly
between the satellite and the receiver station, not relative to a ground-based reference station
that assists with the computation of error corrections.

Differential Positioning

Relative or Differential GPS carries the triangulation principle one step further, with a second
receiver at a known reference point. To further facilitate determination of a point's position
relative to the known earth surface point, this configuration demands collection of an
error-correcting message from the reference receiver. Differential-mode positioning relies upon
an established control point: the reference station is placed on the control point, whose
coordinates are known. This allows a correction factor to be calculated and applied to other
roving GPS units used in the same area and in the same time series.

GPS Co-ordinates:-

To start off, the receiver picks which C/A codes to listen for by PRN number, based on the
almanac information it has previously acquired. As it detects each satellite's signal, it identifies it
by its distinct C/A code pattern, then measures the time delay for each satellite. To do this, the
receiver produces an identical C/A sequence using the same seed number as the satellite. By
lining up the two sequences, the receiver can measure the delay and calculate the distance to the
satellite, called the pseudorange[12].
Figure: overlapping pseudoranges, represented as curves, are adjusted to yield the probable
position.

Next, the orbital position data, or ephemeris, from the Navigation Message is then downloaded to
calculate the satellite's precise position. A more-sensitive receiver will potentially acquire the
ephemeris data quicker than a less-sensitive receiver, especially in a noisy environment. Knowing
the position and the distance of a satellite indicates that the receiver is located somewhere on the
surface of an imaginary sphere centered on that satellite and whose radius is the distance to it.
Receivers can substitute altitude for one satellite, which the GPS receiver translates to a
pseudorange measured from the center of the earth.

Locations are calculated not in three-dimensional space but in four-dimensional spacetime,
meaning that a measure of the precise time-of-day is very important. The measured pseudoranges
from four satellites have been determined with the receiver's internal clock, and thus contain an
unknown amount of clock error. (The clock error or actual time does not matter in the initial
pseudorange calculation, because that is based on how much time has passed between reception of
each of the signals.) The four-dimensional point that is equidistant from the pseudoranges is
calculated as a guess as to the receiver's location, and the factor used to adjust those
pseudoranges to intersect at that four-dimensional point gives a guess as to the receiver's clock
offset. With each guess, a geometric dilution of precision (GDOP) vector is calculated, based on
the relative sky positions of the satellites used. As more satellites are picked up, pseudoranges
from more combinations of four satellites can be processed to add more guesses to the location
and clock offset. The receiver then determines which combinations to use and how to calculate the
estimated position by taking the weighted average of these positions and clock offsets. After the
final location and time are calculated, the location is expressed in a specific coordinate
system, e.g. latitude/longitude, using the WGS 84 geodetic datum or a local system specific to a
country.

Calculating a position with the P(Y) signal is generally similar in concept, assuming one can
decrypt it. The encryption is essentially a safety mechanism: if a signal can be successfully
decrypted, it is reasonable to assume it is a real signal being sent by a GPS satellite. In
comparison, civil receivers are highly vulnerable to spoofing since correctly formatted C/A
signals can be generated using readily available signal generators. RAIM features do not protect
against spoofing, since RAIM only checks the signals from a navigational perspective.

GPS coordinates can also be found through the individual websites of different companies. One is
only required to enter the address of the desired destination.

GPS Types:-
Handheld GPS: This GPS unit can be used while walking in strange towns, hiking, bicycling,
boating or marking landmarks. These units are also portable.

GPS Fishfinders: GPS technology can be used for fishing purposes whether by a weekend
hobbyist or a tournament angler, in fresh water or on a boat out in salt water. Fishing companies
are also increasingly using GPS for fish tracking.

Laptop GPS: There are several ways to put together a laptop GPS system. For use in an
automobile, there are GPS receivers that are made to connect to a laptop via a cable. This allows
the receiver to be placed near the windshield where it can gather satellite signals. The wired GPS
receivers for a laptop are the most inexpensive way to go.

GPS Watches: Most are marketed as speed and distance systems for athletes and do not provide
location information. These speed and distance systems are composed of two parts: a GPS receiver
and a watch that are wirelessly connected by a radio signal. The GPS receiver can be worn on the
arm or clipped to a belt. Some GPS watches, however, do provide location information.

Bluetooth GPS: Bluetooth GPS is a combination that allows you to have a wireless GPS unit
display on a Bluetooth-enabled device such as a PDA or Pocket PC. Bluetooth GPS receivers
became available in late 2002. They can be used in an automobile or for hiking, among other uses.
Because they are wireless, they are powered by their own batteries.

GPS Palm: Most GPS Palms are smaller and some are less expensive. They also are quick and
simple to use. With Palms, there is a large choice of software programs and a wide range of
accessories.

GPS Cell Phones: Cell phone manufacturers can incorporate a GPS receiver into a cell phone.
Advantages of this when used in an automobile are: 1) driving directions in your automobile and
2) the ability to use the cell phone as a handheld GPS for out-of-car purposes.

Golf GPS: There are two main ways one can have a golf GPS system. One is for the player to
have her or his own unit. The other is for the course to provide the system. From the golf course's
point of view, a GPS system that the course owns can be beneficial in many ways. An integrated
system can allow players to order food and drinks, allow two-way communications, and give
weather alerts. The system can even be a revenue generator by being an advertising medium.

GPS Maps: GPS maps provide point of interest coordinates, map images, route data, and track
data for GPS receivers. GPS map software is made for PDAs, laptops, desktop PCs, and specific
brands of GPS units. Many GPS maps have the capability to upload waypoints, routes, and tracks
to some GPS units.

GPS Tracking: With the growing popularity of GPS, there are many companies offering GPS
tracking systems for a wide variety of uses. Uses of GPS tracking systems are:

• Pets
• Wildlife
• Law enforcement
• Theft prevention
• Vehicle

GPS Vehicle Tracking: GPS vehicle tracking has many uses. Consumers can use these systems
to help recover their vehicle if it is stolen or keep tabs on a teenager in the family car. Commercial
users can improve efficiency and individuals using mass transit will be able to find out if their bus
or train is on time.

GPS PCMCIA: GPS and PCMCIA is a combination that allows laptops, PDAs, and the like to
function as GPS units. PCMCIA (Personal Computer Memory Card International Association)
was formed by several Integrated Circuit card manufacturers in 1989. Its purpose was to adopt an
industry standard for computers to accept peripheral devices such as add-on memory and
modems.

Marine GPS: Marine GPS navigation requires knowledge above and beyond land navigation.
Rocks, shallow water, and wrecks are common obstacles, and since fog often occurs on coastal
waters, it's critical to know where a person is. Recreational boaters usually stick close to land and
this may seem to be a clear advantage, but that is where the majority of hazards are. GPS gives
location, but one needs additional information like charts and a compass.

GPS PDA: A personal digital assistant (PDA) is one of those little hand-held computer gadgets
that people are using for a calendar, notes, calculator, mail and contacts. PDAs can become a
phone, a camera, and also a GPS receiver. Pocket PC GPS is a term that can be used to generally
refer to any personal digital assistant (PDA) that has GPS capability.

GPS Personal Locators: A GPS Personal Locator contains a GPS receiver. The device
transmits the GPS data over a GSM/GPRS (cell phone) system. Depending on the system, the
location data can be accessed on a website or transmitted to a control center, which then contacts
the appropriate people. Many of the systems that allow information access on a website let the
user see the GPS location in real-time on a moving map.

USB GPS Receivers: USB GPS receivers are devices that need to be connected to the USB port
of a laptop computer to function. This type of unit is sometimes called a "mouse GPS" as it
resembles a computer mouse.
42 Ground Truth & Accuracy Assessment
Ground Truth:-

In order to "anchor" the satellite measurements, we need to compare them to something we know.
One way to do this is by what we call "ground truth", which is one part of the calibration
process. This is where a person on the ground (or sometimes in an airplane) makes a measurement
of the same thing the satellite is trying to measure, at the same time the satellite is measuring
it. The two answers are then compared to help evaluate how well the satellite instrument is
performing. Usually we believe the ground truth more than the satellite, because we have more
experience making measurements on the ground, and sometimes we can see what we are measuring with
the naked eye.

Ground truth is a term used in cartography, meteorology, analysis of aerial photographs, satellite
imagery and a range of other remote sensing techniques in which data are gathered at a distance.
Ground truth refers to information that is collected "on location". In remote sensing, this is
especially important in order to relate image data to real features and materials on the ground. The
collection of ground-truth data enables calibration of remote-sensing data, and aids in the
interpretation and analysis of what is being sensed.

More specifically, ground truth may refer to a process in which a pixel on a satellite image is
compared to what is there in reality (at the present time) in order to verify the contents of the pixel
on the image. In the case of a classified image, it allows supervised classification to help
determine the accuracy of the classification performed by the remote sensing software and
therefore minimize errors in the classification such as errors of commission and errors of
omission.

Other definitions of Ground truth (from various books):-

• Geophysical parameter data, measured or collected by other means than by the instrument
itself, used as correlative or calibration data for that instrument data. It includes data taken
on the ground or in the atmosphere. Ground truth data are another measurement of the
phenomenon of interest; they are not necessarily more "true" or more accurate than the
instrument data. Source: EPO
• The actual facts of a situation, without errors introduced by sensors or human perception
and judgment. For example, the actual location, orientation, and engine and gun state of
an M1A1 tank in a live simulation at a certain point in time is the ground truth that could
be used to check the same quantities in a corresponding virtual simulation.
• Data collected on the ground to verify mapping from remote sensing data such as air
photos or satellite imagery.
• To verify the correctness of remote sensing information by use of ancillary information
such as field studies.
• In cartography and analysis of aerial photographs and satellite imagery, the ground truth
is the facts that are found when a location is field checked -- that is, when people actually
visit the location on foot.

Ground truth is usually done on site, performing surface observations and measurements of
various properties of the features of the ground resolution cells that are being studied on the
remotely sensed digital image. It also involves taking geographic coordinates of the ground
resolution cell with GPS technology and comparing those with the coordinates of the pixel being
studied provided by the remote sensing software to understand and analyze the location errors and
how it may affect a particular study.

Ground truth is important in the initial supervised classification of an image. When the identity
and location of land cover types are known through a combination of field work, maps, and
personal experience, these areas are known as training sites. The spectral characteristics of
these areas are used to train the remote sensing software, using decision rules for classifying
the rest of the image. Decision rules such as Maximum Likelihood Classification, Parallelepiped
Classification, and Minimum Distance Classification offer different techniques to classify an
image. Additional ground truth sites allow the analyst to establish an error matrix which
validates the accuracy of the classification method used. Different classification methods may
have different percentages of error for a given classification project. It is important that the
analyst chooses a classification method that works best with the number of classes used while
producing the least amount of error.

Figure: ground truth of a satellite image, compared with a person making the same measurement on
the ground.

Ground Truth Data Acquisition:-

The Global Positioning System has developed into an efficient GIS data collection technology
which allows for users to compile their own data sets directly from the field as part of ‘ground
truthing’. Ground-truth surveys are essential components for the determination of accuracy
assessment for classified satellite imagery.

Ground truth also helps with atmospheric correction. Since images from satellites obviously have
to pass through the atmosphere, they can get distorted because of absorption in the atmosphere.
So ground truth can help fully identify objects in satellite photos.

Ways of measurement of Ground truth:

There are a number of ways to take ground truth measurements.

1. The first is what we call a "field campaign". This is where several scientists and
technicians take lots of equipment and set it up somewhere for a short but intense period
of measurement. We get a lot of information from field campaigns, but they are expensive
and only run for a short time.
2. Another source of ground truth is the on-going work of the National Weather Service.
They have a record of weather conditions stretching back for over 100 years.
Observations are made at regular intervals at offices around the country. These provide a
nice record but are not necessarily taken at the same time a satellite passes over the spot.
As clouds are very changeable, things can change completely in even a few minutes.
3. Another option for ground truth is S' COOL. Students at schools around the world can be
involved by making an observation within a few minutes of the time that a satellite views
their area.

Accuracy Assessment:-

INTRODUCTION:

Accuracy assessment is one of the most important considerations in the evaluation of remotely
sensed imagery, yet too often it is not done when imagery is produced. The accuracy of an image
is affected by many variables, including the spatial and spectral resolution of the hyperspectral
sensor, the processing statistics used, the types of classifications chosen, the limits of
detection of different surface materials, the suitability of reference spectra used for image
analysis training, the type and amount of ground truth data acquired, and the type of atmospheric
correction algorithm applied to the imagery.

Definition:
Comparison of a classification with ground-truth data to evaluate how well the classification
represents the real world.

Several kinds of errors - mainly those of "commission" or "omission" - are discussed as a basis for
setting up an accuracy assessment program. Accuracy itself is defined and the point is made that
much depends on just how any class, feature, or material being classified is meaningfully set forth
with proper descriptors. Two factors are important in achieving suitable (hopefully, high)
accuracy: spatial resolution (which influences the mixed pixel effect) and number of spectral
bands involved in the classification.

Errors of commission:-

An example of an error of commission is when certain pixels that are one thing, such as trees, are
classified as another thing, such as asphalt. Ground truthing ensures that the error matrices have a
higher accuracy percentage than would be the case if no pixels were ground truthed.

Errors of omission:-

An example of an error of omission is when pixels of a certain thing, for example maple trees, are
not classified as maple trees. The process of ground truthing helps to ensure that the pixel is
classified correctly and the error matrices are more accurate.

Accuracy Assessment:

Assessing the accuracy of a remote sensing output is one of the most important steps in any
classification exercise. Without an accuracy assessment, the output or results are of little
value. There are a number of issues relevant to the generation and assessment of errors in a
classification. These include:
• the nature of the classification;
• the sample design; and
• the assessment sample size.

• Nature of Classification:-

1. Class definition problems occur when trying to extract information from an image, such as
tree height, which is unrealistic. If this happens the error rate will increase.
2. A common problem in classifying remotely sensed data is the use of inappropriate class
labels, such as cliff, lake or river, all of which are landforms and not cover types.
Similarly, a common error is that of using class labels which define land uses; these
features are commonly made up of several cover classes.
3. The final point here, in terms of the potential for generation of error, is the mislabeling
of classes. The most obvious example of this is to label a training site water when in fact
it is something else. This will result in, at best, a skewing of your class statistics if
your training site samples are sufficiently large, or at worst, shifting the training
statistics entirely if your sites are relatively small.


• Sample Design:-

1. In addition to being independent of the original training sample, the sample used must be
of a design that will ensure consistency and objectivity.
2. A number of sampling techniques can be used, including random, systematic, and
stratified random sampling.
3. Of the three, the systematic sample is the least useful. This approach to sampling may
result in a sample distribution which favors a particular class, depending on the
distribution of the classes within the map.
4. Only random sample designs can guarantee an unbiased sample.
5. A truly random strategy, however, may not yield a sample design that covers the entire
map area, and so may be less than ideal.
6. In many instances the stratified random sampling strategy is the most useful tool. In this
case the map area is stratified, based either on a systematic breakdown followed by a
random sample design in each of the systematic subareas, or alternatively on a random
sample within each of the map classes. This approach ensures adequate coverage of the
entire map as well as a sufficient number of samples for each of the classes on the map
(see the sketch below).
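
A minimal sketch of stratified random sampling from a classified map, assuming the map is a 2-D
array of class labels; the per-class sample size is the analyst's choice and the function name is
our own:

```python
import numpy as np

def stratified_sample(class_map, n_per_class, seed=None):
    """Draw n_per_class random (row, col) cells from each class of the map."""
    rng = np.random.default_rng(seed)
    samples = {}
    for cls in np.unique(class_map):
        rows, cols = np.nonzero(class_map == cls)
        take = min(n_per_class, len(rows))
        idx = rng.choice(len(rows), size=take, replace=False)
        samples[cls] = list(zip(rows[idx].tolist(), cols[idx].tolist()))
    return samples
```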
Types of Sampling:-

Figure: random, systematic, and stratified random sampling designs.
• Sample Size:

1. The size of the sample used must be sufficiently large to be statistically representative of
the map area. The number of points considered necessary varies, depending on the
method used to estimate.
2. What this means is that when using a systematic or random sample, the number of
points is kept to a manageable level. Because the number of points contained within a
stratified area is usually high (greater than 10,000), the number of samples used to
test the accuracy of the classes through a stratified random sample will be high as well, so
the cost of using a highly accurate sampling strategy is a large number of samples.
3. Once a classification has been sampled a contingency table (also referred to as an error
matrix or confusion matrix) is developed.
4. This table is used to properly analyze the validity of each class as well as the
classification as a whole.
5. In this way we can evaluate in more detail the efficacy of the classification.

Levels of Map Accuracy:-

Field Data:
• positional accuracy
• attribute accuracy
• measurement accuracy
Map Boundary:
• registration
• scale
Classification:
• correctly identified classes
• mis-classification
• un-identified classes
Contingency Matrix:-

• One way to assess accuracy is to go out in the field and observe the actual land class
at a sample of locations, and compare it to the land classification assigned on the
thematic map.
• There are a number of ways to quantitatively express the amount of agreement
between the ground truth classes and the remote sensing classes.
• One way is to construct a confusion matrix, alternatively called an error matrix.
• This is a row by column table, with as many rows as columns.
• Each row of the table is reserved for one of the information, or remote sensing, classes
used by the classification algorithm.
• Each column displays the corresponding ground truth classes in an identical order.

Classification & Accuracy Assessment:-

A classification error matrix (confusion matrix, contingency table) provides a comparison, on a
category-by-category basis, of classification results versus known reference data.

Accuracy and Precision:-

Accuracy is an unquestionable goal in remote sensing, and precision is desirable when you can get
it; but in remote sensing of natural resources, precision is hard to obtain. Much of remote
sensing in these areas concerns how to get good results from coarse classes.
Selecting training and test polygons:

• Select training and test polygons prior to actually classifying the data. Ideally, assign
polygons to the training and test datasets randomly.
• Select many polygons for each class.
• Distribute polygons throughout the image to capture class variability. Don't sample more
than twice in the same patch (use a variogram if necessary).
• Sample homogeneous areas of each class; avoid boundaries.

Contingency Tables:-

For a simple example involving only three classes (A, B and C), consider an error matrix whose
diagonal elements, 35, 37 and 41, tally the number of pixels classified correctly in each class;
the row totals are 39, 50 and 47, the column totals are 50, 40 and 46, and there are 136 pixels
in all.
An overall measure of classification accuracy:-

Overall accuracy = (total number of correct classifications) / (total number of classifications)

which in this example amounts to (35 + 37 + 41) / 136 = 113/136, or 83%.
But just because 83% of classifications were accurate overall does not mean that each category
was successfully classified at that rate.

User's accuracy:-

1. A user of the imagery who is particularly interested in class A, say, might wish to know
what proportion of pixels assigned to class A were correctly assigned.
2. In this example, 35 of the 39 pixels assigned to class A were correctly assigned, giving a
user's accuracy in this category of 35/39 = 90%.
In general terms, user's accuracy for a particular category is computed as:

User's accuracy = (number of correct classifications) / (total classifications in the category)

which, for an error matrix set up with the row and column assignments as stated, is

User's accuracy = (number in diagonal cell of error matrix) / (row total)

Evidently, user's accuracy can be computed for each row.

Producer's accuracy:-
1. Contrasted with user's accuracy is producer's accuracy, which has a slightly different
interpretation.
2. Producer's accuracy is a measure of how much of the land in each category was classified
correctly.
It is found, for each class or category, as

Producer's accuracy = (number in diagonal cell of error matrix) / (column total)

Accuracy assessment:-
So from this assessment we have three measures of accuracy which address subtly different
issues:
1. Overall accuracy takes no account of the source of error (errors of omission or commission).
2. User's accuracy measures the proportion of each thematic map class which is correct.
3. Producer's accuracy measures the proportion of the land base which is correctly classified.
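
These three measures can be read straight off the error matrix. In the minimal sketch below, the
diagonal (35, 37, 41) and the row and column totals come from the worked example above; the
individual off-diagonal cells are illustrative values chosen only to be consistent with those
totals:

```python
import numpy as np

# Rows = remote sensing (map) classes, columns = ground truth classes.
# Off-diagonal cells are assumed, consistent with the example's totals.
m = np.array([[35,  1,  3],     # class A: row total 39
              [11, 37,  2],     # class B: row total 50
              [ 4,  2, 41]])    # class C: row total 47

overall   = np.trace(m) / m.sum()       # 113/136 ~ 0.83
users     = np.diag(m) / m.sum(axis=1)  # per row:    35/39, 37/50, 41/47
producers = np.diag(m) / m.sum(axis=0)  # per column: 35/50, 37/40, 41/46
```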

Kappa coefficient:-
Another measure of map accuracy is the kappa coefficient, which is a measure of the proportional
(or percentage) improvement by the classifier over a purely random assignment to classes. For an
error matrix with r rows (and columns), it is estimated as

K-hat = ( N * sum(x_ii) - sum(x_ir * x_ic) ) / ( N^2 - sum(x_ir * x_ic) )

where the sums run over i = 1, ..., r, and
r    = number of rows (and columns) in the error matrix
N    = total number of observations in the error matrix
x_ii = major diagonal element for class i
x_ir = total number of observations in row i
x_ic = total number of observations in column i

Equivalently,

K-hat = (observed accuracy - chance agreement) / (1 - chance agreement)

Normally 0 <= K-hat <= 1, although K-hat can also be negative.
K-Hat Index:-

K-hat provides a basis for determining the statistical significance of any given classification
matrix. The quality of the accuracy estimate depends on the quality of the information used as
ground truth (which has its own accuracy estimate).
For an error matrix with r rows, and hence the same number of columns,
let A = the sum of the r diagonal elements, which is the numerator in the computation of overall
accuracy, and
let B = the sum of the r products (row total x column total).
Then

K-hat = (N * A - B) / (N^2 - B)

where N is the number of pixels in the error matrix (the sum of all r individual cell values).

For the above error matrix,

A = 35 + 37 + 41 = 113
B = (39 x 50) + (50 x 40) + (47 x 46) = 6112
N = 136

so K-hat = (136 x 113 - 6112) / (136^2 - 6112) = 9256 / 12384 = 0.75.
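
Continuing the same sketch, K-hat follows directly from A, B and N:

```python
# Using the error matrix m from the previous sketch
N = m.sum()                                 # 136
A = np.trace(m)                             # 113
B = (m.sum(axis=1) * m.sum(axis=0)).sum()   # 39*50 + 50*40 + 47*46 = 6112
k_hat = (N * A - B) / (N**2 - B)            # 9256 / 12384 ~ 0.75
```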

The error matrix, producer’s and user’s accuracy and KHAT value have become standard in
assessment of classification accuracy. However, if the error matrix is improperly generated by
poor reference data collection methods, then the assessment can be misleading. Therefore
sampling methods used for reference data should be reported in detail so that potential users can
judge whether there may be significant biases in the classification accuracy assessment.
43 Map projections, Concept, Classification, Use, Type,
Polyconic, Mercator, UTM
Map projections, Concept and Use


A map projection is the manner in which the spherical surface of the Earth is represented on a
two-dimensional surface. This can be accomplished by direct geometric projection or by a
mathematically derived transformation. There are many kinds of projections, but all involve
transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto
an easily flattened, or developable, surface.

Construction of a map projection


The creation of a map projection involves three steps:
1. Selection of a model for the shape of the earth or planetary body (usually choosing
between a sphere or ellipsoid)
2. Transformation of geographic coordinates (longitude and latitude) to plane coordinates
(easting or x,y)
3. Reduction of the scale (it does not matter in what order the second and third steps are
performed)

Because the real earth's shape is irregular, information is lost in the first step, in which an
approximating, regular model is chosen. Reducing the scale may be considered to be part of
transforming geographic coordinates to plane coordinates.
Most map projections, both practically and theoretically, are not "projections" in any physical
sense. Rather, they depend on mathematical formulae that have no direct physical interpretation.
However, in understanding the concept of a map projection it is helpful to think of a globe with a
light source placed at some definite point with respect to it, projecting features of the globe onto a
surface. The following discussion of developable surfaces is based on that concept.
A surface that can be unfolded or unrolled into a flat plane or sheet without stretching, tearing or
shrinking is called a 'developable surface'. The cylinder, cone and of course the plane are all
developable surfaces. The sphere and ellipsoid are not developable surfaces. Any projection that
attempts to project a sphere (or an ellipsoid) on a flat sheet will have to distort the image (similar
to the impossibility of making a flat sheet from an orange peel).
One way of describing a projection is to project first from the earth's surface to a developable
surface such as a cylinder or cone, followed by the simple second step of unrolling the surface
into a plane. While the first step inevitably distorts some properties of the globe, the developable
surface can then be unfolded without further distortion.

Classification
A fundamental projection classification is based on type of projection surface onto which the
globe is conceptually projected. The projections are described in terms of placing a gigantic
surface in contact with the earth, followed by an implied scaling operation. These surfaces are
cylindrical (e.g., Mercator), conic (e.g., Albers), and azimuthal or plane (e.g., stereographic).
Many mathematical projections, however, do not neatly fit into any of these three conceptual
projection methods. Hence other peer categories have been described in the literature, such as
pseudoconic (meridians are arcs of circles), pseudocylindrical (meridians are straight lines),
pseudoazimuthal, retroazimuthal, and polyconic.
Another way to classify projections is through the properties they retain. Some of the more
common categories are:
• Direction preserving, called azimuthal (but only possible from the central point)
• Locally shape-preserving, called conformal or orthomorphic
• Area-preserving, called equal-area or equiareal or equivalent or authalic
• Distance preserving - equidistant (preserving distances between one or two points and
every other point)
• Shortest-route preserving - gnomonic projection

Projections by surface

Cylindrical

The space-oblique Mercator projection was developed by the USGS for use in Landsat images.
The term "cylindrical projection" is used to refer to any projection in which meridians are
mapped to equally spaced vertical lines and circles of latitude (parallels) are mapped to
horizontal lines (or, mutatis mutandis, more generally, radial lines from a fixed point are
mapped to equally spaced parallel lines and concentric circles around it are mapped to
perpendicular lines).

The mapping of meridians to vertical lines can be visualized by imagining a cylinder (of which
the axis coincides with the Earth's axis of rotation) wrapped around the Earth and then projecting
onto the cylinder, and subsequently unfolding the cylinder.

Unavoidably, all cylindrical projections have the same east-west stretching away from the equator
by a factor equal to the secant of the latitude, compared with the scale at the equator. The various
cylindrical projections can be described in terms of the north-south stretching:
• North-south stretching equal to the east-west stretching (secant(L)): the east-west scale
matches the north-south scale: conformal cylindrical or Mercator; this distorts areas excessively
in high latitudes (see also transverse Mercator).
• North-south stretching growing rapidly with latitude, even faster than the east-west stretching
((secant(L))^2): the cylindric perspective (= central cylindrical) projection; unsuitable because
distortion is even worse than in the Mercator projection.
• North-south stretching growing with latitude, but less quickly than the east-west stretching:
such as the Miller cylindrical projection (secant(L*4/5)).
• North-south distances neither stretched nor compressed (factor 1): equidistant cylindrical or
plate carrée.
• North-south compression precisely the reciprocal of east-west stretching (cos(L)): equal-area
cylindrical (with many named specializations such as Gall-Peters or Gall orthographic,
Behrmann, and Lambert cylindrical equal-area). This divides north-south distances by a factor
equal to the secant of the latitude, preserving area but heavily distorting shapes.
In the first case (Mercator), the east-west scale always equals the north-south scale. In the second
case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from
the equator. Each remaining case has a pair of identical latitudes of opposite sign (or else the
equator) at which the east-west scale matches the north-south scale.
Cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases,
where the rectangle stretches infinitely tall while retaining constant width.
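These five cases can be made concrete with the standard y(latitude) formulas on a unit sphere;
in every normal cylindrical projection x is simply R times the longitude in radians, so only the
north-south mapping differs. A minimal sketch:

    import math

    R = 1.0  # unit sphere; x = R * longitude(rad) for all normal cylindrical projections

    def y_mercator(lat):      # conformal: N-S stretch = sec(lat), matching E-W
        return R * math.log(math.tan(math.pi / 4 + lat / 2))

    def y_central(lat):       # central cylindrical: N-S stretch = sec(lat)**2
        return R * math.tan(lat)

    def y_equidistant(lat):   # plate carree: N-S distances unchanged
        return R * lat

    def y_equal_area(lat):    # Lambert cylindrical equal-area: N-S factor = cos(lat)
        return R * math.sin(lat)

    for deg in (0, 30, 60, 80):   # watch the spacing of the parallels diverge
        lat = math.radians(deg)
        print(deg, round(y_mercator(lat), 2), round(y_central(lat), 2),
              round(y_equidistant(lat), 2), round(y_equal_area(lat), 2))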
Conic Projections
Conical projections are accomplished by intersecting, or touching, a cone with the global surface
and mathematically projecting lines onto this developable surface. A tangent cone intersects the
global surface to form a circle. This is conceptually equivalent to the touching of a sweatband of a
hat on a head. On this line of intersection, termed the standard parallel, the map will be relatively
error-free and possess equidistance. Cones may also be secant, and intersect the global surface
forming two circles which will possess equidistance. Note that use of the word "secant", in this
instance, is only conceptual, not geometrically accurate. As with planar projections, the conical
aspect may be polar, equatorial, or oblique.

• Albers Equal Area Conic
o A conic projection that distorts scale and distance except along standard parallels.
Areas are proportional and directions are true in limited areas. Used in the United
States and other large countries with a larger east-west than north-south extent.
• Equidistant Conic
o Direction, area, and shape are distorted away from standard parallels. Used for
portrayals of areas near to, but on one side of, the equator
• Lambert Conformal Conic
o Area and shape are distorted away from standard parallels. Directions are true in
limited areas. Used for maps of North America.
• Polyconic

The polyconic projection was used for most of the earlier USGS topographic quadrangles. The
projection is based on an infinite number of cones tangent to an infinite number of parallels. The
central meridian is straight. Other meridians are complex curves. The parallels are non-concentric
circles. Scale is true along each parallel and along the central meridian.

Azimuthal (projections onto a plane)

An azimuthal projection shows distances and directions accurately from the center point, but
distorts shapes and sizes elsewhere.
Azimuthal projections have the property that directions from a central point are preserved (and
hence, great circles through the central point are represented by straight lines on the map). Usually
these projections also have radial symmetry in the scales and hence in the distortions: map
distances from the central point are computed by a function r(d) of the true distance d,
independent of the angle; correspondingly, circles with the central point as center are mapped into
circles which have as center the central point on the map.
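For the classical azimuthal projections, the radial function r(d) has a simple closed form on a
sphere. The following sketch (an assumed spherical radius; standard textbook formulas) shows how
the same true distance from the centre maps to quite different radii on the map:

    import math

    R = 6371.0  # assumed spherical earth radius, km

    def r_equidistant(d):    return d                            # distances true from centre
    def r_stereographic(d):  return 2 * R * math.tan(d / (2*R))  # conformal
    def r_equal_area(d):     return 2 * R * math.sin(d / (2*R))  # Lambert azimuthal
    def r_orthographic(d):   return R * math.sin(d / R)          # perspective hemisphere view
    def r_gnomonic(d):       return R * math.tan(d / R)          # great circles -> straight lines

    d = 5000.0  # true distance from the projection centre, km
    for name, f in (("equidistant", r_equidistant), ("stereographic", r_stereographic),
                    ("equal-area", r_equal_area), ("orthographic", r_orthographic),
                    ("gnomonic", r_gnomonic)):
        print(name, round(f(d)))   # ranges from about 4500 to 6400 km on the map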

• Azimuthal Equidistant
o Azimuthal equidistant projections are sometimes used to show air-route
distances. Distances measured from the center are true. Distortion of other
properties increases away from the center point.
• Lambert Azimuthal Equal Area
o The Lambert azimuthal equal-area projection is sometimes used to map large
ocean areas. The central meridian is a straight line, others are curved. A straight
line drawn through the center point is on a great circle.
• Orthographic
o Orthographic projections are used for perspective views of hemispheres. Area and
shape are distorted. Distances are true along the equator and other parallels.
• Stereographic
o Stereographic projections are used for navigation in polar regions. Directions are
true from the center point and scale increases away from the center point as does
distortion in area and shape.
Polyconic
This projection was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern
coast of the U.S. Polyconic projections are made up of an infinite number of conic projections
tangent to an infinite number of parallels. These conic projections are placed in relation to a
central meridian.
Polyconic projections compromise properties such as equal area and conformality, although the
central meridian is held true to scale. All parallels are arcs of circles, but not concentric. All
meridians, except the central meridian, are concave toward the central meridian. Parallels cross
the central meridian at equal intervals but move farther apart at the east and west peripheries.
Once again, values of false easting and northing are usually included so that NO negative values
occur in the rectangular coordinate system representing the desired region of the map projection.

Mercator
This famous cylindrical projection was originally designed by the Flemish mapmaker Gerardus
Mercator in 1569 to aid navigation. Meridians and parallels are straight lines which cross at right
angles. Angular relationships are preserved.
To preserve conformality, parallels are placed increasingly farther apart with increasing distance
from the equator. Due to extreme scale distortion in high latitudes, the projection is rarely
extended beyond 80 degrees North or South.
Rhumb lines, which show constant direction, are straight but do NOT represent the shortest path;
great circles are the shortest path.
Again, values of false easting and northing are usually included so that NO negative values occur
in the rectangular coordinate system representing the desired region of the map projection.
Universal Transverse Mercator (UTM) - a global system developed by the US Military
Services
This is an international plane (rectangular) coordinate system developed by the U.S. Army which
extends around the world from 84 degrees North to 80 degrees South. The world is divided into
60 zones, each covering six (6) degrees of longitude. Each zone extends three degrees eastward
and three degrees westward from its central meridian. Zones are numbered consecutively west to
east from the 180 degree meridian. From 84 degrees North and 80 degrees South to the respective
poles, the Universal Polar Stereographic (UPS) is used.
The Transverse Mercator projection is applied to each UTM zone. Transverse Mercator is a
transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90
degrees from the vertical (polar) axis and can be placed to intersect at a chosen central meridian.
The UTM system specifies the central meridian of each zone. With a separate projection for each
UTM zone, a high degree of accuracy is possible (maximum distortion of one part in 1,000 within
each zone).
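The zone arithmetic described above can be sketched in a few lines of Python (a minimal sketch;
the 500,000 m false easting noted in the comments is the standard UTM convention for keeping
eastings positive):

    def utm_zone(lon_deg):
        # Zone 1 spans 180 W to 174 W; zones run consecutively west to east.
        return int((lon_deg + 180.0) // 6) % 60 + 1

    def central_meridian(zone):
        # The central meridian lies 3 degrees east of the zone's western edge.
        return zone * 6 - 183

    # Example: longitude 72.6 E falls in zone 43 (72 E to 78 E), central meridian 75 E.
    # Each zone also applies a 500,000 m false easting so that no point west of the
    # central meridian receives a negative easting.
    z = utm_zone(72.6)
    print(z, central_meridian(z))   # -> 43 75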
44. Map Scale: Types and Conversion, Vertical Exaggeration
The scale of a photograph expresses the mathematical relationship between a distance measured
on the photo and the corresponding horizontal distance measured in a ground coordinate system.
Unlike maps, which have a single constant scale, aerial photographs have a range of scales that
vary in proportion to elevation of the terrain involved. Once the scale of the photograph is known
at any particular elevation, ground distances at that elevation can be readily estimated from
corresponding photo distance measurements.

Photographic scale:-

One of the most fundamental and frequently used geometric characteristics of aerial photographs
is that of photographic scale. A photographic “scale”, like a map scale, is an expression that states
that one unit (any unit) of distance on a photograph represents a specific number of units of actual
ground distance. Scales may be expressed as unit equivalents, representative fractions or ratios. A
scale can vary from large to small on the basis of the area covered: the same objects appear smaller
on a smaller-scale photograph than on a larger-scale photograph. The most straightforward method
for determining photo scale is to measure the corresponding photo and ground distances between
any two points. This requires that the points be mutually identifiable on both the photo and a map.
The scale S is then computed as the ratio of the photo distance ‘d’ to the ground distance ‘D’

S = photo scale = photo distance / ground distance = d / D
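A short worked example (the two measurements are hypothetical): if two road intersections lie
42.5 mm apart on the photo and 850 m apart on the ground, then

    d = 0.0425                         # photo distance, metres (42.5 mm)
    D = 850.0                          # ground distance, metres
    S = d / D                          # = 0.00005
    print("S = 1:%d" % round(D / d))   # -> S = 1:20000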

Vertical photograph:-

For a vertical photograph taken over flat terrain, scale is a function of the focal length ‘f’ of the
camera used to acquire the image and the flying height above the ground, H’, from which the
image was taken. In general,

Scale = camera focal length / flying height above terrain = f / H’

The above equation applies only to flat terrain, which is rare in practice.
So, in order to calculate the scale over generally sloping or rough terrain, we have to reformulate
the above equation.

The exposure station L is at an aircraft flying height H above some datum, or arbitrary base
elevation. The datum most frequently used is mean sea level. If the flying height H and the
elevation of the terrain h are known, we can determine H’ by subtracting h from H, i.e.
H’ = H - h.

If we consider terrain points A, O and B, they are imaged at points a’, o’ and b’ on the negative
film and at a, o and b on the positive print. We can derive an expression for photo scale from the
similar triangles Lao and LAO, whose sides ao and AO are the corresponding photo and ground
distances, i.e.

S = ao / AO = f / H’

Yet another way of expressing this equation is:

S = f / (H - h)

Often it is convenient to compute an average scale for an entire photograph so:

S (avg) = f / (H - h (avg))
Where h (avg) is the average elevation of the terrain shown in the photograph.
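A brief numeric sketch of these relationships (the focal length and heights below are illustrative
assumptions) shows how photo scale grows as the terrain rises toward the camera:

    f = 0.152      # camera focal length, metres (a common 152 mm mapping camera)
    H = 3000.0     # flying height above the datum, metres (assumed)

    for h in (0.0, 500.0, 1000.0):      # terrain elevations above the datum
        S = f / (H - h)                 # S = f / (H - h)
        print("h = %4.0f m -> scale 1:%d" % (h, round((H - h) / f)))
    # -> 1:19737, 1:16447, 1:13158 (larger scale over higher ground)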

Because of the nature of this projection, any variation in terrain elevation will result in scale
variation and displaced image positions.

Stereoscopy:-

Stereoscopy is the method of viewing aerial photographs with the help of a stereoscope.
Stereoscopic vision makes it possible to perceive a three-dimensional model from a pair of
overlapping aerial photographs.

Vertical Exaggeration :-

When we view aerial photographs through a stereoscope, the perceived image is influenced by
several technical factors, so the apparent relief differs from the true relief. To quantify this
effect we calculate the vertical exaggeration.

Vertical exaggeration, VE = (B / H) x (h / b)

Where,
B = Air base
H = Flying height
b = Eye base
h = depth at which stereo model is perceived.
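A short worked example with assumed but typical values for the air base, flying height, eye base
and perceived model distance:

    B = 1800.0    # air base: distance between exposure stations, metres (assumed)
    H = 3000.0    # flying height above mean terrain, metres (assumed)
    b = 0.065     # eye base, metres (about 65 mm)
    h = 0.45      # perceived distance to the stereo model, metres (assumed)

    VE = (B / H) * (h / b)
    print(round(VE, 1))   # -> 4.2: relief appears roughly four times exaggerated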

45. GIS: Definitions, Components, Objectives and Hardware
& Software Requirements
Geographic Information system (GIS):-

A Geographic Information System (GIS) is a system for capturing, storing, analyzing and
managing data and associated attributes which are spatially referenced to the earth. It is a
computer system capable of integrating, storing, editing, analyzing, sharing, and displaying
geographically-referenced information. GIS is a tool that allows users to create interactive
queries (user created searches), analyze the spatial information, edit data, maps, and present the
results of all these operations.

It integrates common database operations such as query and statistical analysis with the unique
visualization and geographic analysis benefits offered by maps. These abilities distinguish GIS
from other information systems and make it valuable to a wide range of public and private
enterprises for explaining events, predicting outcomes, and planning strategies. (ESRI).
A typical GIS can be understood by the help of various definitions given below:-

1. A geographic information system (GIS) is a computer-based tool for mapping and
analyzing things that exist and events that happen on Earth.
2. Burrough in 1986 defined GIS as, "Set of tools for collecting, storing, retrieving at will,
transforming and displaying spatial data from the real world for a particular set of
purposes"
3. Aronoff in 1989 defines GIS as, "a computer-based system that provides four sets of
capabilities to handle geo-referenced data :
- data input
- data management (data storage and retrieval)
- manipulation and analysis
- data output. "
Hence GIS is looked upon as a tool to assist in decision-making and in the management of
attributes that need to be analysed spatially.

Three Views of a GIS

A GIS is most often associated with maps. A map, however, is only one way you can work with
geographic data in a GIS, and only one type of product generated by a GIS. This is important,
because it means that a GIS can provide a great deal more problem-solving capabilities than using
a simple mapping program or adding data to an online mapping tool (creating a "mash-up").

A GIS can be viewed in three ways:

1. The Database View: A GIS is a unique kind of database of the world—a geographic
database (geodatabase). It is an "Information System for Geography." Fundamentally, a
GIS is based on a structured database that describes the world in geographic terms.
2. The Map View: A GIS is a set of intelligent maps and other views that show features and
feature relationships on the earth's surface. Maps of the underlying geographic
information can be constructed and used as "windows into the database" to support
queries, analysis, and editing of the information. This is called geovisualization.

3. The Model View: A GIS is a set of information transformation tools that derive new
geographic datasets from existing datasets. These geoprocessing functions take
information from existing datasets, apply analytic functions, and write results into new
derived datasets.

In other words, by combining data and applying some analytic rules, one can create a
model that helps answer the question for analysis.
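A deliberately tiny, self-contained Python sketch of this model view (the features, coordinates
and the analytic rule are invented purely for illustration): two existing datasets are combined
under a rule to derive a new dataset.

    import math

    # Existing dataset 1: wells as (name, x_km, y_km, attributes)
    wells = [("W1", 2.0, 3.0, {"depth_m": 40}),
             ("W2", 9.0, 8.0, {"depth_m": 12}),
             ("W3", 4.0, 4.5, {"depth_m": 55})]

    # Existing dataset 2: a single pollution source
    factory = (3.0, 3.5)

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Analytic rule: deep wells (> 30 m) within 2 km of the factory
    at_risk = [w[0] for w in wells
               if w[3]["depth_m"] > 30 and distance((w[1], w[2]), factory) <= 2.0]
    print(at_risk)   # -> ['W1', 'W3'], a new, derived dataset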
Components of GIS

GIS is a real application, including the hardware, data, software and people needed to solve a
problem.

GIS hardware: A GIS runs on hardware like any other computer system: keyboard, display monitor
(screen), cables and an Internet connection, plus some extra components, because maps come on
big sheets of paper:

- specially big printers and plotters are needed to make map output from a GIS
- specially big devices (digitizers and scanners) are needed to scan and input data from maps to a GIS

But not all GIS installations will need these; what is important is the kind of information that is
stored: information about what is where, the contents of maps and images. A GIS includes the tools
to do things with that information: special functions that work on geographic information to display
it on the screen, edit, change and transform it, and measure distances. Keeping the combined maps of
an area together is simple, but functions can be much more sophisticated, for example to:
- keep inventories of what is where
- manage properties and facilities
- judge the suitability of areas for different purposes
- help users make decisions about places, and to plan
- make predictions about the future, etc.
All these sophisticated functions also require human expertise for the interpretation and
management of data.

GIS Software:
The functions that a GIS can perform are part of its software. This software will probably have
been supplied by a company that specializes in GIS. The price of the software may be anywhere
from $50 to $50,000.

GIS software is classified into three major categories:

Open Source Software
- Properties: the source code is freely available and is licensed so that it can be freely
distributed and modified as long as appropriate credit is provided to the developers.
- Example: GeoTools, an open source Java GIS toolkit for developing standards-compliant
solutions; GeoTools aims to support OpenGIS and other relevant standards as they are developed.
- Other examples: Fmaps, EDBS Reader, GMT, Intergraph WMS Viewer.

Server-based Software
- Properties: server GIS is used for many kinds of centrally hosted GIS computing.
- Example: GIServer, "an initiative from the inova GIS project that gives free access to GIS
functions through the Internet."
- Other examples: MapServer etc.

Desktop Software
- Properties: licensed software whose source code is not freely available.
- Example: ESRI, whose available software includes ArcGIS, ArcSDE, ArcIMS, and ArcWeb services;
known best for the ESRI shapefile format, which is often used to supply or transfer GIS data.
- Other examples: EPPL7, Ilwis, Intergraph, Manifold etc.

Many types of paid GIS software and freeware GIS software are available in the market for
different purposes; they are given below:
Paid GIS Software:

1) AGISMap
2) Autodesk
3) DeLorme
4) EPPL7
5) ESRI
6) Geo/SQL
7) Idrisi32
8) Ilwis
9) Intergraph
10) Keigan Sys. Inc
11) MapGrafix
12) Manifold
13) Mapinfo
14) Maptitude
15) MetaMap
16) Myworld
17) Terrain Tools
18) TNT Products

Freeware GIS Software:

1) ArcExplorer
2) FlowMap
3) GMT Mapping Tool
4) GRASS
5) SPRING
6) TNTLite
Some information about these software packages is given below:

1) AGIS

• AGIS is a user-friendly mapping and simple geographic information system.
• A multi-document interface allows the concurrent display of a number of individual
maps, each composed of a number of map and data layers.
• Maps and data can be provided by the user in the form of text files either created by other
programs, or typed using a text editor.
• All control files used by AGIS are ASCII text, so it is easier to create such files using
other systems.
• The program has sufficient control dialogs built in to create and edit all control
information without requiring the user to understand the structure of these files.
• High quality map images can be copied to the clipboard and pasted into popular packages
such as Microsoft Word, or saved directly to your hard disk as JPEG or BMP files.
• The latitude and longitude of the cursor are displayed at all times when it is over a displayed
map, and cursor types can be selected to allow zooming and measuring.
• Map files are stored efficiently as binary files after conversion from a simple text import
format.
• On importing a map, the projection type, projection origin, minimum resolution and map
boundaries can be specified by the user.
• To use the map-serving capability, your web server needs to be running on Windows 95,
98, ME, NT, 2000 or XP.

2) AUTODESK

• Autodesk has a series of software applications designed to meet GIS needs in a variety of
areas.
• Autodesk Map delivers specialized functionality for creating, maintaining, and producing
maps and geographic data.
• Built on AutoCAD® 2000i, AutoCAD Map 2000i adds new Internet tools to keep you in
touch with your colleagues, customers, and data.
• Autodesk MapGuide gives live, interactive access to your designs, maps, and data from the
Internet, your intranet, or in the field. Autodesk MapGuide® Release 5 software makes it
all possible.
Platforms: UNIX, PC, Macintosh, WinCE, and Palm devices.

3) DeLorme

• DeLorme is the producer of XMap, a GIS application "with 80% of the functionality
found in a traditional GIS at 15% of the cost".
• Performs functions such as geocoding, image rectification, 3D visualization and
coordinate transformation.
• XMap 4.5 is powerful and scalable mapping software that provides users with easy-to-use
and affordable digital mapping tools.
• Add-on software modules expand capabilities further encompassing image registration
and aerial photography mission planning.
• A wide variety of DeLorme data and imagery sets are available that work seamlessly with
XMap 4.5.
• The platform’s data structure enables XMap to support OpenGIS® and interoperability
between most data formats.
• Affordable and feature packed, XMap 4.5 provides users with import tools, data
management flexibility, split-screen viewing, advanced draw and print capabilities, and a
variety of different DeLorme datasets from which to choose.
• Data that is analyzed within the XMap/GIS Editor package can be viewed within XMap
4.5.
• XMap 4.5 is a flexible, comprehensive tool designed to meet the spatial data needs of
professionals within a variety of industries:

1. Utilities
2. Civil Engineering
3. Public Safety
4. Government
5. Land Management
6. Transportation
7. Real Estate

4) EPPL7

• EPPL is a raster-based GIS package.


• The program can be used to create, manage, analyze and display spatial (geographic)
data; and to create and work with tabular and attribute data.
• EPPL7 also allows users to digitize vectors, convert vector data to raster format and
integrate the vector and raster data for on-screen display and print-outs.
• EPPL7 also provides many routines for converting vector and raster data to and from a
standard format
• It can reclassify data, generate two-layer models, perform cross-tab analysis with up to
five layers, import "point" data using the Public Land Survey or GPS points, model
uniform and directional buffers, and interpolate a continuous surface from point data.
• It offers a wide range of practical operations, including:

Landscape analysis: viewshed, terrain visualization, slope analysis, aspect analysis
Neighborhood operations: averaging, evaluating, clustering or grouping cells, distance mapping,
buffers, reclassification
Overlay functions: logical evaluation of overlapping themes, mosaics, vector and raster data
Data conversion: interpolating point and line data, tabular data to raster files, file format
conversion, file transformation
Utility and file management: rescaling and resizing files, using windows

5) ESRI

• Environmental Systems Research Institute (ESRI) has been creating GIS software for over
30 years.
• Recognized as the leader in GIS software, it's been estimated that about seventy percent
of GIS users use ESRI products. ESRI overhauled their software packages into an
interoperable model called ArcGIS.
• The three main GIS software packages available from ESRI are: ArcInfo/ArcView 8.x,
ArcView 3.x and ArcIMS.
• ArcInfo was the first software product available from ESRI and is also the most
comprehensive analytical and mapping software offered by ESRI.
• ArcView 3.x is the original desktop solution offered by ESRI as an out-of-the box
desktop mapping software product for the end user.
• More user friendly than ArcInfo, ArcView's editing and data manipulation capabilities are
extended with each update.
• In addition, ESRI has developed plug-ins called extensions which add to the functionality
of ArcView. ArcIMS is a relatively young product from ESRI designed to create out-of-
the-box web mapping but also allowing developers to create more involved, custom
browser-based mapping applications.
• A Visual Basic component, MapObjects, allows programmers to build cartographic
applications from the ground up.
• Platforms: UNIX, Windows OS

6) Geo/SQL

• It is a low cost, full function Microsoft Windows based GIS.


• It has the power to capture large amounts of data and manipulate it into usable
information, produces excellent visual presentations, and is low cost.
• It provides the power to create maps, integrate information, visualize scenarios, solve
complicated problems, present powerful ideas, and develop effective solutions like never
before.
• Works with many GIS data formats as well as Oracle Spatial Cartridge.
• At the desk/top level, Geo/SQL works with all major GIS formats including ESRI,
Mapinfo, Autodesk as well as Oracle, and ODBC.
• At the enterprise level, Geo/SQL not only uses these popular GIS formats, but also
provides seamless spatial data using SQL database technology such as Oracle Cartridge
and Geo/SQL Spatial SQL.
• Using Microsoft Windows as a foundation, any SQL database which supports ODBC can
be used to manage the large volumes of geographic and textual data required for the most
demanding applications.
• GIS is a tool used by individuals and organizations, schools, governments, and businesses
seeking innovative ways to solve a variety of problems.

7) IDRISI

• IDRISI Kilimanjaro is a sophisticated GIS and image processing software solution that includes
over 200 modules for the analysis and display of digital spatial information.
• IDRISI is the industry leader in raster analytical functionality covering the full spectrum
of GIS and Remote Sensing needs from database query, to spatial modeling, to image
enhancement and classification.
• IDRISI Kilimanjaro uses the latest object-oriented development tools, bringing true
research power to the NT workstation and desktop.
• TIN interpolation, Kriging and conditional simulation are also offered.
• Typical application areas include spatial analysis, remote sensing, natural resources, ecology
and conservation, environmental management, and land use planning.
• Special facilities are included for environmental modeling and natural resource
management, including change and time series analysis, land change prediction, multi-
criteria and multi-objective decision support, uncertainty analysis and simulation
modeling.

8) ILWIS

• ILWIS is a GIS and remote sensing package offering orthorectification, geostatistics and
overlay capabilities.
• ILWIS integrates image, vector and thematic data in one unique and powerful package on
the desktop.
• ILWIS delivers a wide range of features including import/export, digitizing, editing,
analysis and display of data, as well as production of quality maps.
• The main features of ILWIS are:
- Integrated raster and vector design
- Import and export of widely used data formats
- On-screen and tablet digitizing
- Comprehensive set of image processing tools
- Orthophoto, image georeferencing, transformation and mosaicking
- Advanced modeling and spatial data analysis
- 3D visualization with interactive editing for optimal view finding
- Rich projection and coordinate system library
- Geo-statistical analyses, with Kriging for improved interpolation

BIBLIOGRAPHY AND WEBLIOGRAPHY

Books:

Dr. M K Sharma - REMOTE SENSING AND FOREST SURVEYS.


M. Anji Reddy – REMOTE SENSING AND GEOGRAPHICAL INFORMATION
SYSTEMS.
REMOTE SENSING AND IMAGE INTERPRETATION, Thomas N Lillesand, Ralph W
Kiefer, Jonathan W Chipman.
GIS: FUNDAMENTALS, APPLICATIONS AND IMPLEMENTATIONS – Dr. K
Elangonavan
REMOTE SENSING: PRINCIPLES AND APPLICATIONS – Dr. B.C. Panda
Sloggett, D.R.: ‘SATELLITE DATA – PROCESSING, ARCHIVING AND
DISSEMINATION’. Vol. 1: Applications and infrastructure, Ellis Howard Ltd.
Barrett, E.C. and Curtis, L.F. (1976): INTRODUCTION TO ENVIRONMENTAL REMOTE
SENSING’. Second edition, Chapman and Hall.
Lillesand, Thomas M., Kiefer, Ralph W. and Chipman, Jonathan W. (2004): ‘REMOTE
SENSING AND IMAGE INTERPRETATION’, Fifth edition, Wiley Publication.
Courtesy of the Wisconsin State Cartographer’s Office
Embley, D., Nagy, G.: “A MULTI-LAYERED APPROACH TO QUERY PROCESSING IN
GEOGRAPHIC INFORMATION SYSTEMS”, Geographic Database Management Systems,
Workshop Proceedings, Capri, Italy; Springer-Verlag, Berlin, pp 293-317, 1992.
Frank, A.: “BEYOND QUERY LANGUAGES FOR GEOGRAPHIC DATABASES: DATA
CUBES AND MAPS”, Geographic Database Management Systems, Workshop Proceedings,
Capri, Italy; Springer-Verlag, Berlin, pp 293-317, 1992.
Langran, G.: “TIME IN GEOGRAPHIC INFORMATION SYSTEMS”, Taylor & Francis,
London, pp 27-44, 1992.
Panda, B.C.: “REMOTE SENSING: PRINCIPLES AND APPLICATIONS”, Viva Books Pvt.
Ltd., First Edition, pp 185-211, 2005.
Smith, T., Ramakrishnan, R., Voisard, A.: “OBJECT-BASED DATA MODEL AND
DEDUCTIVE LANGUAGE FOR SPATIO-TEMPORAL DATABASE APPLICATIONS”,
Geographic Database Management Systems, Workshop Proceedings, Capri, Italy; Springer-
Verlag, Berlin, pp 79-102, 1992.
Spencer, J., Frizzelle, B., Page, P., and Vogler, J.: ”GLOBAL POSITIONING SYSTEMS: A
FIELD GUIDE FOR SOCIAL SCIENCES”, Blackwell Publishing Ltd., 350 Main Street,
Malden, Massachusetts, pp 89-112, 2003.
“AN INTRODUCTION TO GIS” by Ian Heywood
ESRI White Paper
Chartrand, Gary and Oellermann, Ortrud R. “APPLIED AND ALGORITHMIC GRAPH
THEORY”. McGraw-Hill. 1993.
Winyoopradist, Soottipong and Siangsuebchart, Songkorn. “NETWORK ANALYSIS FOR
VARIABLE TRAFFIC SPEED”. ESRI User Conference 1999 Proceedings, 1999.
Siangsuebchart, Songkorn. “A DESIGN AND DEVELOPMENT OF SOFTWARE TOOLS
FOR IMPLEMENTATION OF ROUTE DATA STRUCTURE IN GEOGRAPHIC
INFORMATION SYSTEM.” Department of Computer Engineering, Chulalongkorn University,
Thailand. 1998.
MODELS OF SPATIAL PROCESSES: AN APPROACH TO THE STUDY OF POINT
LINE AND AREA PATTERNS ( Getis, Arthur Boots)
REMOTE SENSING – MODELS AND METHODS FOR IMAGE PROCESSING by
Robert.A. Schowengerdt, Second Edition (P.202 -208)
REMOTE SENSING – PRINCIPLES AND INTERPRETATION by Floyd. F. Sabins. JR,
Second Edition (P.246 – 251)


Lillesand, Kiefer & Chipman. REMOTE SENSING & IMAGE INTERPRETATION. Wiley
publication.
Sabins Jr., F. F. 1987. REMOTE SENSING; PRINCIPLES AND INTERPRETATION. New
York: W. H. Freeman
M.Anji Reddy. TEXTBOOK OF REMOTE SENSING AND GEOGRAPHICAL
INFORMATION SYSTEMS. B.S.Publications
J.Ronald Eastman. GUIDE TO GIS AND IMAGE PROCESSING. IDRISI Production
David L.Verbyla, SATELLITE REMOTE SENSING OF NATURAL RESOURCES, 2005.
Michael Lefsky, PRESENTATION ON ACCURACY ASSESSMENT, 2006.
Globe, TUTORIAL ON ACCURACY ASSESSMENT, 2005.

Websites:

http://www.itc.nl/~bakker/earsel/9806b.html - Wim Bakker - ITC 25 May 1998


Microsoft Encarta 2007 – msn.encarta.com
http://asia.spaceref.com/news/viewpr.html?pid=6367
http://earthobservatory.nasa.gov
http://chesapeake.towson.edu/data/all_electro.asp
http://paces.geo.utep.edu/nasa_paces/basic.html
www.wikipedia.org
www.nrsa.gov.in
www.crisp.nus.edu.sg
www.eduspace.esa.int
http://www.physics.uwstout.edu/wx/wxsat/measure.htm
http://www.oso.noaa.gov/index.htm
http://www.nasa.gov
www.eurimage.com
www.esri.com
www.gisdevelopment.org
www.blogs.msdn.com
http://www.gisdevelopment.net/magazine/index.htm
http://www.maps-gps-info.com/gp.html
http://www.webopedia.com/TERM/G/GPS.html
www.gisdevelopment.net
www.ua.t.u-tokyo.ac.jp
www.spatialanalysisonline.com
www.pop.psu.edu
www.cas.sc.edu/geog/rslab
www.nrcan-rncan.gc.ca
http://www.fas.org/irp/imint/docs/rst/Front/tofc.html
www.cinnic.org/CINNIC-figures/
http://www.cast.uark.edu/local/brandon_thesis/Chapter_IV_gps.htm
