
CHAPTER II

DIGITAL IMAGE PROCESSING

2.1 INTRODUCTION
Image processing involves the manipulation and interpretation of digital images
with the aid of a computer. This is an extremely broad subject, and it often involves
procedures which can be mathematically very complex. A digital image is fed into the
computer and stored as digital numbers, each value corresponding to a particular
geographic position in the form of rows and columns. The computer is programmed to
insert these data into an equation or series of equations and store the results of the
computation for each pixel. The results form a new digital image that may be displayed
or recorded in pictorial format, or further manipulated by additional programs. The
computer operations on the image data can be categorized into four broad types:
image rectification and restoration, image enhancement, image classification, and
data merging and GIS integration.
Image Processing for the Study Area
The terrain parameters can be accurately identified from a digitally processed
image. As the study area is a complex hilly terrain, it was decided to carry out
digital image processing to obtain better interpretation capability for LHZM. For
image processing, PAN-merged geocoded IRS P6 L4 MX A01 digital
scenes (Lat. 11°40'00" to 12°00'00" N and Long. 78°05'00" to 78°25'00" E) of Path 102
and Rows 123 & 124 of the year 2007 were used (Fig. 2.1).
The image processing was performed using ERDAS Imagine 9.1 software in a
Windows environment. Seven well-distinguishable Ground Control Points (GCPs) on

Fig. 2.1 - Satellite image of the study area
Fig. 2.2 - SRTM data for the study area (elevation values up to 1635 m)

the SOI toposheets were identified/registered on the satellite image, which was then
rectified/reoriented. The resultant rectified satellite image had an RMSE (Root
Mean Square Error) of 0.45. The satellite image was transformed at the standard default
5.8 m resolution, projected to the polyconic coordinate system using the
nearest-neighbour resampling method, and radiometrically corrected using the histogram
equalization tool in ERDAS Imagine. From the entire scene, the
study area (Shevaroy hills) was extracted using the subset option in the software, and
an FCC (False Colour Composite) with the band combination 4,3,2,1 was generated and
displayed on the computer screen.
After completing the pre-processing of the image, the FCC image was
enhanced radiometrically, spatially and spectrally to improve its visual
interpretability for various applications. The enhanced image was then
subjected to visual interpretation for preparing various thematic maps related to
landslip/landslide contributing/influencing variables.
The processed FCC was printed at 600-dpi resolution on a 42" HP plotter
at 1:50,000 scale for field checking. Based on the preliminary analysis, i.e.
ground-truthing of the IRS P6 LISS IV hard copy, an interpretation key was prepared based on
tone, texture, pattern and association. GPS (Global Positioning System) readings and
other collateral data, along with the necessary corrections and modifications, were
incorporated in the preliminary interpretation, considering the topography and landform types.
DEM
One of the most important data types for landslide hazard assessment is a
Digital Elevation Model (DEM), also called a Digital Terrain Model (DTM). A DEM
can be generated using photogrammetrical methods from airphotos or satellite

images, or can be generated through contour interpolation. DEMs can be in raster
format, in which each cell holds the altitude of its central point, or in vector format in
the form of Triangulated Irregular Networks (TINs). DEMs have a wide range of
applications. They can be used to generate slope direction maps and slope gradient
maps; for raster DEMs, directional filters are used to calculate these features. Other
applications are the generation of slope convexity maps and slope length maps, and
the automatic detection of drainage and catchment areas. DEMs are also used as a
basis for hydrological modelling.
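As an illustrative sketch, the slope gradient and aspect maps mentioned above can be derived from a raster DEM with finite differences. NumPy is assumed here purely for illustration (the thesis work itself used ERDAS Imagine), and the function name is hypothetical:

```python
import numpy as np

def slope_aspect(dem, cell_size=1.0):
    """Derive slope gradient (degrees) and aspect (degrees, clockwise
    from north) from a raster DEM using finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # elevation change per cell
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# A plane rising 1 m per 1 m cell toward the east has a 45-degree slope
# and faces west (aspect 270 degrees).
dem = np.tile(np.arange(5, dtype=float), (5, 1))
slope, aspect = slope_aspect(dem)
```

The same directional-filter idea underlies the slope products that ERDAS generates from a DEM.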
2.1.1 Image Rectification and Restoration
These operations aim to correct distorted or degraded image data to create a
more faithful representation of the original scene. The corrections include geometric
correction of the raw image, radiometric calibration of the data, and elimination of
noise present in the data.
Hence, the restoration of any particular image is highly dependent on the
characteristics of the sensor used. Image rectification and restoration
procedures are known as image pre-processing.
2.1.2 Image Enhancement
These techniques are applied to improve the visual interpretation of features in
the scene. There are no simple rules for producing a single 'best' image for a
particular application. Enhancement procedures can be applied in the radiometric
domain, spatial domain or spectral domain. In the radiometric domain, it is referred
to as contrast enhancement; in the spatial domain, it is referred to as convolution


Fig. 2.5c - 3D model of the study area
Fig. 2.5b - DEM wrapped with IRS-P6 LISS 4 data
Fig. 2.5a - Triangulated Irregular Network (TIN)

filtering/edge enhancement/Fourier analysis; in the spectral domain, it is referred to
as band ratioing/indices, principal component analysis, etc.
Image enhancement techniques are used to improve the visual interpretability
of a raw input image by increasing the apparent distinctions between the features in
the scene. The human mind is capable of identifying obscure or subtle features, but the
eye is poor at discriminating very slight radiometric/spectral differences. Computer
enhancement aims to visually amplify these slight differences and to make them
readily observable. The techniques we use will depend upon our data, objectives,
expectations and background.
Enhancement techniques are categorized as either point or local operations.
A point operation modifies the brightness value of each pixel independently, whereas
a local operation modifies the value of a pixel based on the neighbouring brightness
values.
2.1.3 Image Classification
The objective is to replace visual analysis with quantitative techniques for the
identification of features in the scene. The procedure involves the analysis of
multi-spectral image data and the application of statistically based decision rules for
determining the land cover identity of each pixel in an image. When the decision is based
on spectral radiance, the classification is referred to as 'spectral pattern
recognition'. If the decision rule is based on geometric shape, size and pattern,
it falls into 'spatial pattern recognition'. Both bring out the 'themes', or various
land covers. Multi-spectral classification can be carried out through 'supervised' or
'unsupervised' approaches.


2.1.4 Data merging and GIS Integration


This involves the combination of image data for a given geographic area with
other geographically referenced data sets for the same area, commonly handled in a
Geographic Information System (GIS). For example, we can combine image data
with soil, topography, land use and assessment information.
All these procedures are inter-related. Restoration and noise removal can be
considered enhancement procedures. Enhancement procedures can be used to
improve the efficiency of classification. Similarly, data merging can be used in image
classification to improve accuracy.
2.2 STATISTICAL ANALYSIS IN IMAGE PROCESSING
Once the digital data are imported, it is important to calculate the statistics
(both univariate and multivariate) of the multispectral data, as this greatly helps in
selecting the different image enhancement techniques. The calculation of image
statistics involves computing the minimum and maximum value for each band of
imagery, along with the standard deviation, mean, variance-covariance matrix,
correlation matrix and frequencies of brightness values ('DN' values) in each band, which are
used to produce histograms. The histogram is the most important graphical
representation of image data. It gives a clear picture of the distribution of
'DN' values in the pixels of each band separately. Since we deal with 'DN' values or
brightness values of pixels, the frequency distribution can be brought out by
histograms, i.e., a histogram is a graph of DN values against their frequencies. For a
single band of data, the horizontal axis of the histogram shows the range of data file
values ('DN' value or grey value, 0-255). The vertical axis shows the number of
pixels that have each data value.
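The per-band statistics and histogram described above can be sketched as follows. This is an illustrative NumPy example, not the ERDAS Imagine routine actually used for the study area:

```python
import numpy as np

def band_statistics(band):
    """Univariate statistics and the 256-bin histogram of one 8-bit band."""
    values = band.ravel()
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    return {
        "min": int(values.min()),
        "max": int(values.max()),
        "mean": float(values.mean()),
        "std": float(values.std()),
        "histogram": hist,          # frequency of each DN value 0-255
    }

# A tiny 2x3 band: the histogram counts how many pixels hold each DN value.
band = np.array([[10, 10, 20], [20, 20, 30]], dtype=np.uint8)
stats = band_statistics(band)
```

Plotting `stats["histogram"]` against the DN axis 0-255 reproduces the histogram graph described in the text.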

Bin functions
Bins are ordered sets of pixels: pixels are sorted into a specified number of
bins and are then given new values based upon the bins to which they are assigned.
Bins are used to group ranges of 'DN' values together for better manageability.
Mean
The statistical average is the sum of a set of values divided by the number of
values in the set. For the study area, the mean image was generated and is shown in
Fig. 2.3.
Median
The central value in a set of data, such that an equal number of values are
greater than and less than it. In the same way, a median image of the study area was
generated and is shown in Fig. 2.4.
Mode
The most commonly occurring value in a set of data. In a histogram, it is the
peak of the curve.
Maximum
Using image processing, the highest values in the data set were generated for
the study area and are shown in Fig. 2.5.


Fig. 2.3 - Focal analysis: mean (3x3)
Fig. 2.4 - Focal analysis: median (3x3)
Fig. 2.5 - Focal analysis: maximum (3x3)
Fig. 2.6 - Focal analysis: minimum (3x3)

Minimum
The minimum values in the data set were processed and generated, and are shown
in Fig. 2.6.
Variance
Variance is a measure of the spread (dispersion) of a dataset. The average of
the squares of the deviations of a set of observations from their mean value is known
as the variance. In the same way, a variance image has been generated and is shown in Fig. 2.7.
Standard deviation
It is the square root of the variance of a set of values, and is a measure
of the spread of the values. Based on the equation for sample variance, the sample
standard deviation (SQ) for a set of values Q is computed and shown in Fig. 2.8.
Since the variance is the average of the squares of the deviations, the wider the
values are spread, the greater the variance and the greater the standard deviation. Standard
deviations are used because the lowest and highest data file values may lie much
farther from the mean.
Parameters
The mean and standard deviation are known as parameters, and are
sufficient to describe a normal curve. Once the mean and standard deviation are
known, other calculations about the data can easily be done.
Covariance
In a bivariate distribution, covariance is calculated as the average of the products
of the deviations of the DN values of a pixel in two bands with respect to their mean values.

Fig. 2.7 - Focal analysis: variance
Fig. 2.8 - Focal analysis: standard deviation
Fig. 2.9 - Brightness inversion (inverse, float single)
Fig. 2.10 - Brightness inversion (reverse, float single)

Covariance measures the tendency of data file values in the same pixel, but in
different bands, to vary with each other in relation to the means of their respective
bands. The relationship between these bands is assumed to be linear.
Covariance matrix
It is an n x n matrix that contains all of the variances and covariances within
'n' bands of data. The covariance matrix is an organized format for storing variance and
covariance information on a computer system.
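The band covariance matrix just described can be sketched with NumPy (an illustrative example only; band values are hypothetical):

```python
import numpy as np

def band_covariance(image):
    """Covariance matrix of an (n_bands, rows, cols) image stack:
    an n x n matrix holding every band variance and covariance."""
    n_bands = image.shape[0]
    flat = image.reshape(n_bands, -1).astype(float)  # one row per band
    return np.cov(flat)                              # n_bands x n_bands

# Band 2 is exactly twice band 1, so the two bands are fully correlated
# and their covariance is twice the variance of band 1.
image = np.stack([
    np.array([[1, 2], [3, 4]]),    # band 1
    np.array([[2, 4], [6, 8]]),    # band 2 = 2 x band 1
])
cov = band_covariance(image)
```

Such perfectly correlated bands are exactly the redundancy that principal component analysis, discussed later in this chapter, compresses away.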
Dimensionality of data
It is the spectral dimensionality, determined by the number of bands of data
used in image processing; for example, 4-band data is 4-dimensional.
Measurement vector
It is the set of data file values for one pixel in all 'n' bands.
Mean vector
It is the vector of the means of the data file values in each band.
Spectral distance
It is the distance of the DN values of a pixel with respect to the mean vector in
n-dimensional spectral space. It is a number that allows two measurement vectors to
be compared for similarity.
Polynomials
A polynomial is a mathematical expression consisting of variables and coefficients. A
coefficient is a constant, which is multiplied by a variable in the expression.

Matrix
It is a set of numbers arranged in a rectangular array. If a matrix has 'i' rows
and 'j' columns, it is said to be an i x j matrix.
2.3 RADIOMETRIC ENHANCEMENT
Radiometric enhancement deals with the individual values of the pixels in the
image. It differs from spatial enhancement, which takes into account the values of
neighbouring pixels.
Depending upon the points and the bands in which they appear, a radiometric
enhancement that is applied to one band may not be appropriate for other bands.
Therefore, radiometric enhancement of a multi-band image can usually be
considered as a series of independent, single-band enhancements.
Radiometric enhancement usually does not bring out the contrast of every
pixel in an image: contrast can be lost between some pixels while being gained on others.
To explain this, consider a histogram.
2.3.1 Contrast stretching
Image display and recording devices typically operate over a range of 256
grey levels (0-255), the maximum number that can be represented in 8-bit coding. In most
raw images, the grey values rarely extend over this full range, and hence the images exhibit low
contrast. Therefore, it is necessary to stretch the raw input image range to
accentuate the contrast between the features of interest. When radiometric
enhancements are performed on the display device, the data file
values are transformed into brightness values.


Contrast enhancement can be classified, depending upon the function applied
to the data, as simple linear stretch, piecewise linear stretch or non-linear stretch.
For example, a polyline function is applied in a piecewise linear stretch.
Linear contrast stretch
To explain the principles of contrast stretching, consider a histogram of
brightness values recorded in one spectral band over a scene, in which the brightness
values occur in the limited range 60-158. If we used these values directly in the
display device, the levels 0-59 and 159-255 would not be utilised and the total
information would be compressed into a small range. If we expand the range (60-158)
to the full range (0-255), the image values will be uniformly expanded to the full
range of the output device, and features will be more readily distinguished. This
uniform expansion is known as a linear stretch. The linear stretch is applied to each
pixel in the image using the algorithm:
DN(out) = ((DN(in) - MIN) / (MAX - MIN)) x 255

Where,
DN(out) = digital number assigned to the output image
DN(in) = original digital number of the input image
MAX = maximum value of the input image (158), to be assigned 255 in the output image
MIN = minimum value of the input image (60), to be assigned 0 in the output image


Linear contrast stretch is a simple way to improve the visible contrast of an
image. It is often necessary to contrast-stretch raw image data so that they can be seen
on the display. A two-standard-deviation linear contrast stretch is automatically
applied to images displayed in the ERDAS viewer. One drawback of the linear
contrast stretch is that it assigns as many display levels to rarely occurring
image values as it does to frequently occurring values.
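The linear stretch algorithm given above can be sketched in a few lines of NumPy. This is an illustrative implementation of the DN(out) formula, not the ERDAS viewer routine:

```python
import numpy as np

def linear_stretch(band, dn_min=None, dn_max=None):
    """Linear contrast stretch: map [dn_min, dn_max] onto the full
    0-255 output range, per the formula DN(out) = (DN(in)-MIN)/(MAX-MIN)*255."""
    band = band.astype(float)
    dn_min = band.min() if dn_min is None else dn_min
    dn_max = band.max() if dn_max is None else dn_max
    out = (band - dn_min) / (dn_max - dn_min) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# The 60-158 example from the text: 60 maps to 0 and 158 maps to 255.
raw = np.array([60, 109, 158])
stretched = linear_stretch(raw)
```

The midpoint 109 lands near the middle of the output range, showing the uniform expansion the text describes.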
Non-linear contrast stretch
A non-linear contrast enhancement can be used to gradually increase or
decrease the contrast over a range, instead of applying the same amount of contrast
(slope) across the entire image. Usually, non-linear enhancement brings out the
contrast in one range while decreasing the contrast in other ranges.
Piecewise linear contrast stretch
A piecewise linear contrast stretch allows enhancement of a specific portion of
the data by dividing the look-up table into three or more sections, for example low,
middle and high. It allows different straight-line segments to be created, which can
simulate a curve, and the contrast/brightness of any one section can be enhanced at a
time. This technique is very useful for enhancing image areas in shadow or other
areas of low contrast, in both the inverse and reverse senses, as shown in Figs. 2.9
& 2.10.
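The piecewise idea, a look-up table built from straight-line segments, can be sketched as follows (an illustrative NumPy example; the breakpoints are hypothetical):

```python
import numpy as np

def piecewise_stretch(band, breakpoints, outputs):
    """Piecewise linear contrast stretch: a 256-entry look-up table built
    from straight-line segments between (breakpoint, output) pairs,
    e.g. separate low, middle and high sections."""
    lut = np.interp(np.arange(256), breakpoints, outputs)
    return lut[band].astype(np.uint8)

# Expand the dark 0-64 section onto 0-192 and compress 64-255 into 192-255,
# enhancing shadow detail at the cost of contrast in the highlights.
band = np.array([[0, 32, 64], [128, 192, 255]], dtype=np.uint8)
out = piecewise_stretch(band, [0, 64, 255], [0, 192, 255])
```

Adding more breakpoints produces more segments, which is how a curve is simulated.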
2.3.2 Histogram Equalization
Histogram equalization is a non-linear stretch that redistributes the pixel
values so that there are approximately equal numbers of pixels with each value
within a range. Histogram equalization has been performed for the study area and is
shown in Fig. 2.14. The result approximates a flat histogram: the contrast
is increased at the peaks (i.e. the most populated range of brightness) and lessened
at the tails. Histogram equalization applies the greatest contrast enhancement to the
most populated range of brightness values in the image, and automatically reduces the
contrast in the very high (light) or very low (dark) parts of the image that are
associated with the tails of a normally distributed histogram.
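A minimal sketch of histogram equalization via the cumulative distribution function is given below (an illustrative NumPy example, not the ERDAS Imagine tool used for Fig. 2.14):

```python
import numpy as np

def equalize(band):
    """Histogram equalization: map DN values through the normalized
    cumulative histogram, so frequently occurring values receive the
    widest spacing in the output range."""
    hist, _ = np.histogram(band.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[band].astype(np.uint8)

# Half the pixels share DN 50; equalization spreads the populated values
# across the output range, approximating a flat histogram.
band = np.array([[50, 50, 50, 50], [100, 100, 150, 200]], dtype=np.uint8)
flat = equalize(band)
```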
Histogram matching
This is the process of determining a look-up table (LUT) that will convert the
histogram of one image to resemble the histogram of another. This is useful for
matching data of the same or adjacent scenes that were scanned on separate days,
or that differ slightly because of sun angle or atmospheric effects, and is especially
used for mosaicking and change detection. To match the histograms, a
LUT is mathematically derived, which serves as a function for converting one
histogram to the other.
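The LUT derivation can be sketched by matching cumulative histograms (an illustrative NumPy example with hypothetical scene data):

```python
import numpy as np

def match_histograms(source, reference):
    """Histogram matching: derive a LUT so the source image's cumulative
    histogram follows the reference image's, as used when mosaicking
    adjacent scenes acquired under different conditions."""
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = src_hist.cumsum() / src_hist.sum()
    ref_cdf = ref_hist.cumsum() / ref_hist.sum()
    # For each source DN, find the reference DN with the same CDF position.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255)
    return lut[source].astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)      # under-exposed scene
bright = np.full((4, 4), 180, dtype=np.uint8)   # neighbouring scene
matched = match_histograms(dark, bright)
```

After matching, the dark scene's DN values are remapped toward the brightness of its neighbour, so the mosaic seam is less visible.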
2.4 SPATIAL ENHANCEMENT
Spatial enhancement techniques are used to emphasize or de-emphasize
image data of various spatial frequencies. While radiometric enhancements operate
on each pixel individually, spatial enhancement modifies pixel values based on the
values of the surrounding pixels. These techniques operate on spatial frequency,
which is the difference between the highest and lowest values of a contiguous set of
pixels. Jensen (1986) defines spatial frequency as the number of changes in
brightness value per unit distance for any particular part of an image.


Figs. 2.11 - 2.13 - Noise reduction
Fig. 2.14 - Histogram equalization

1. Zero spatial frequency: a flat image, in which every pixel has the same value.
2. Low spatial frequency: an image consisting of a smoothly varying grey scale.
3. Highest spatial frequency: an image consisting of a checkerboard of black and white pixels.
High-frequency image areas are tonally rough, i.e., the grey levels change
abruptly over a relatively small number of pixels (roads, field borders, etc.), whereas
low-frequency areas are tonally smooth, with grey levels that vary gradually over a
larger area.
2.4.1 Filtering in spatial domain
Because spatial frequency, by its very nature, describes the brightness
values over a spatial region, it is necessary to adopt a special approach to extract
quantitative spatial information.
Spatial frequency filtering is the process of improving the appearance and
interpretability of the spatial distribution of data in an image. It consists of
selectively enhancing the high- or low-frequency variations of DN in an image. Filters
that emphasize high frequencies and suppress low frequencies are high pass
filters, and those that do the reverse are low pass filters. This process is analogous to
electronic filtering in amplifiers to reduce hiss and rumble, or to enhance the bass or
treble in a sound recording.


Convolution Filtering
This is the process of averaging small sets of pixels across an image so as to
change the spatial frequency characteristics of the image.
Low Pass Filtering
A simple low pass filter may be implemented by passing a moving window
throughout the input image and creating an output image whose DN at each pixel
corresponds to the local average within the moving window.
Thus, it smoothens the surface and reduces the grey level range. Low pass
filters are best used to remove random noise, making the image more homogeneous. This
simple smoothing blurs the output, especially at the edges, and the blurring becomes more
accentuated with larger kernels (5x5 or 7x7), as shown in Figs. 2.15 & 2.16.
To reduce the blurring effect, unequally weighted smoothing masks can be used. Applying
a low pass filter with a 3x3 kernel may make the image two lines and two columns smaller
than the original image. Low pass filtering is especially applicable in scenes
such as thermal plumes, as shown in Fig. 2.14.
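The moving-window average described above can be sketched directly in NumPy (an illustrative example; ERDAS's convolution tool handles the edges and weighting internally):

```python
import numpy as np

def low_pass(band, size=3):
    """Low pass (mean) filter: replace each pixel with the local average
    inside a size x size moving window; edges are handled by reflection
    so the output keeps the input dimensions."""
    pad = size // 2
    padded = np.pad(band.astype(float), pad, mode="reflect")
    out = np.zeros_like(band, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out / (size * size)

# A single noisy spike of 9 is averaged down to 1 across its 3x3 window.
band = np.zeros((5, 5))
band[2, 2] = 9.0
smooth = low_pass(band)
```

Increasing `size` to 5 or 7 reproduces the stronger blurring the text attributes to larger kernels.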
High Pass Filtering
This technique is applied to imagery to remove the slowly varying components
and retain only the high-frequency local variations. This filter is computed by
subtracting the output of the low pass filter (pixel by pixel) from the
original central pixel value, as shown in Figs. 2.17, 2.18 & 2.19.
A. If the kernel is applied on a set of pixels in which a relatively low value is
surrounded by high values, the low value gets lower.


Fig. 2.15 - Convolution filter (3x3 kernel, low pass)
Fig. 2.16 - Convolution filter (5x5 kernel, low pass)
Fig. 2.17 - Convolution filter (7x7 kernel, low pass)
Fig. 2.18 - Convolution filter (3x3 kernel, high pass)
Fig. 2.19 - Convolution filter (5x5 kernel, high pass)
Fig. 2.20 - Convolution filter (7x7 kernel, high pass)
Figs. 2.21 & 2.22 - Wallis adaptive filter (bandwise, float single)

B. If the kernel is applied on a set of pixels in which a relatively high value is
surrounded by low values, the high value gets higher.
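Both behaviours A and B follow directly from the definition "original minus local average", as this illustrative NumPy sketch shows:

```python
import numpy as np

def high_pass(band, size=3):
    """High pass filter: subtract the local mean (the low-frequency
    component) from each pixel, leaving only the local variation."""
    pad = size // 2
    padded = np.pad(band.astype(float), pad, mode="reflect")
    low = np.zeros_like(band, dtype=float)
    for dy in range(size):
        for dx in range(size):
            low += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    low /= size * size
    return band - low          # original minus low pass output

# Case A: a low pixel (1) surrounded by high values (10) gets pushed
# lower (to -8), while the flat background stays at zero.
band = np.full((5, 5), 10.0)
band[2, 2] = 1.0
sharp = high_pass(band)
```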
Crisp filtering
Crisp filters sharpen the overall scene luminance without distorting the
inter-band variance content of the image. This is a useful enhancement if the image is
blurred due to atmospheric haze, rapid sensor motion or a broad point spread
function of the sensor. For the study area, the crisp-filtered image has been prepared
and is shown in Fig. 2.20.
2.4.2 Resolution Merge
The resolution of a specific sensor may refer to its radiometric, spatial or temporal
resolution. For example, the Landsat TM sensor has seven bands with a spatial
resolution of 28.5 m, while SPOT panchromatic has one band with a very good spatial
resolution of 10 m. Combining these two images to yield a seven-band data set with
10 m resolution would provide the best characteristics of both sensors. However, this
technique is restricted to only three bands (Red, Green, Blue). The resolution merge
function has two different options for re-sampling low spatial resolution data to a
higher spatial resolution while retaining spectral information: forward or
reverse principal component transform, and multiplicative.
Adaptive Filtering
There are many circumstances where an image stretching technique is not
the optimum approach, for example coastal studies, where much of the water detail
is spread through a very low DN range and the land detail is spread through a much
higher DN range. In this case, a filter that adapts the stretch to the region of interest
(the area within the moving window) would produce a better enhancement. Adaptive
filters attempt to achieve this, as shown in Figs. 2.21 & 2.22.
2.4.3 Edge detection/Enhancement
Edge and line detection are important operations in digital image processing.
For example, geologists are often interested in mapping lineaments, which may be
fault lines or bedding structures. Digital filters developed specifically to enhance edges
(linear features) in images fall into two categories: directional and non-directional.
High pass filtering preserves linear features but loses the low-frequency components;
edge enhancement attempts to preserve both. Edge enhancement techniques can be
either linear or non-linear.
Linear edge enhancement
This enhancement technique systematically compares each pixel in an image
to one of its immediately adjacent neighbours and displays the difference as a
grey level output image. The direction may be vertical, horizontal or diagonal. It is
also possible to convolve the digital data with a matching operator (3x3 or 5x5).
These operators approximate the eight possible pass directions, as shown in
Figs. 2.23 - 2.28.
Laplacian filtering
Edge enhancement without regard to edge direction may be obtained by
applying a Laplacian filter kernel, as shown in Fig. 2.29.


Figs. 2.23 - 2.28 - Convolution filters (3x3, 5x5 and 7x7 kernels; edge detect and edge enhance)
Fig. 2.30 - Convolution filter (3x3 kernel, Laplacian edge detection)

Non-linear edge enhancement


The most common type of non-linear edge enhancement uses the zero-sum
filters developed by Sobel and Prewitt. For this type of filter, the coefficients are
designed to sum to zero, as shown in Figs. 2.30 & 2.31.
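The Sobel operator just mentioned can be sketched as follows. This illustrative NumPy example convolves the two standard zero-sum 3x3 kernels and combines their responses:

```python
import numpy as np

def sobel_magnitude(band):
    """Sobel edge detection: two zero-sum 3x3 kernels estimate the
    horizontal and vertical gradients; their magnitude marks edges
    regardless of direction."""
    gx_kernel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_kernel = gx_kernel.T                      # the second zero-sum kernel
    padded = np.pad(band.astype(float), 1, mode="edge")
    rows, cols = band.shape
    gx = np.zeros((rows, cols))
    gy = np.zeros((rows, cols))
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + rows, dx:dx + cols]
            gx += gx_kernel[dy, dx] * window
            gy += gy_kernel[dy, dx] * window
    return np.hypot(gx, gy)

# A vertical step edge: strong response along the edge, zero on flat areas.
band = np.hstack([np.zeros((5, 3)), np.full((5, 3), 10.0)])
edges = sobel_magnitude(band)
```

Because each kernel sums to zero, flat regions produce no response, which is exactly the zero-sum property the text describes.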
2.5 SPECTRAL ENHANCEMENT
Spectral enhancement is used to enhance multi-spectral images. In this
technique, a series of arithmetic operations such as addition, subtraction, multiplication
and division is performed to transform the multi-spectral/multi-band image into a new
image with properties better suited to a particular purpose than the original. This technique is
used for: a) compression of bands of data that are similar; b) extraction of new bands
of data that are more interpretable to the eye; c) mathematical transformation; and d)
increasing the variety of information in the display using the available RGB guns. The
techniques considered for discussion are principal component analysis, decorrelation
stretch, tasseled cap transformation, colour transformation and indices/ratioing.
2.5.1 Principal Components Analysis (PCA)
It involves the transformation of raw satellite data into a new image that is often
more interpretable than the original data. It is the process of calculating principal
component bands, which allows redundant data to be compressed into fewer bands,
thereby reducing the dimensionality of the data. Principal components are transects
of a scatter plot of two or more bands of data that represent the widest
variability, and successively smaller amounts of variability not already
represented. Principal components are perpendicular to one another. In multi-spectral
imagery, adjacent bands are generally correlated, and correlation implies
some repetition or redundancy of data. PCA compresses the redundant data
and, since the PCA data are uncorrelated and independent, their interpretability is greater
than that of the raw/original data.
PCA should be attempted when compression of the data is desired. The relationships
between pixels representing different cover types become clearer when viewed in the
principal axis system than within the original spectral bands. Data
compression is useful only if more than 3 bands are used, as in Landsat TM,
IRS and SPOT images. If the basic dimensionality is three, then multi-band data
can be compressed to 3 principal components, as shown in Figs. 2.32, 2.33 & 2.34.
The information in 7 bands, as in Landsat TM, can be compressed into three principal
components. These components can be used for generating false colour
composites.
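The compression of correlated bands into principal components can be sketched with NumPy's eigendecomposition (an illustrative example with synthetic bands, not the ERDAS PCA tool used for Figs. 2.32 - 2.34):

```python
import numpy as np

def principal_components(image):
    """PCA on an (n_bands, rows, cols) stack: the eigenvectors of the
    band covariance matrix define new, uncorrelated component images,
    ordered by decreasing variance (eigenvalue)."""
    n_bands, rows, cols = image.shape
    flat = image.reshape(n_bands, -1).astype(float)
    centred = flat - flat.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centred))  # ascending order
    order = np.argsort(eigvals)[::-1]                   # widest transect first
    pcs = eigvecs[:, order].T @ centred
    return pcs.reshape(n_bands, rows, cols), eigvals[order]

# Two perfectly correlated bands: all the variance falls on the first
# component, and the second component carries essentially nothing.
b1 = np.arange(16, dtype=float).reshape(4, 4)
image = np.stack([b1, 2.0 * b1])
pcs, variances = principal_components(image)
```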
Principal component analysis involves rotating the axes of the ellipse (2
variables) or ellipsoid (multiple variables) enclosing the scatter plot in spectral space,
and changing the coordinates of each pixel in spectral space, i.e., changing the
original DN values along the axes representing the bands. Thereby, a redistribution of
the brightness values along a set of new axes or dimensions takes place. The new set
of axes will be parallel to the axes of the ellipse.
First Principal Component
The length and direction of the widest transect of the ellipse are calculated.
The transect corresponding to the longest axis of the ellipse is the first principal
component of the data, shown in Fig. 2.32; the direction of this first principal
component is the first eigenvector, and its length is the first eigenvalue. The first
principal component is used to define a new axis of the spectral space. When the

Fig. 2.31 - Non-directional edge (Sobel)
Figs. 2.32 - 2.34 - Principal components (float single, components 1-3)

new axis is defined, the points in the scatter plot will have new coordinates
corresponding to the new axis. Since the points are the 'DN' values of pixels, changing
the coordinates of the points by this process yields new data file
values. These values are stored in the first principal component band of a new
data file. Since the first principal component shows the direction and length of the
widest transect of the ellipse in spectral space, this axis measures the highest
variation within the data. The first eigenvalue will always be greater than the ranges of
the input bands.
Successive principal components
The second principal component is the widest transect of the ellipse that is
perpendicular to the first principal component. The second principal component, shown in
Fig. 2.33, describes the largest amount of variance in the data that is not described
by the first PC.
So far we have dealt with only two-dimensional data, where the first (major) PC axis
covers about 75% of the data and the second, perpendicular (minor) PC axis covers about
10% or more. When multi-spectral data in 'n' dimensions are considered, 'n' principal
components are naturally expected. Each successive PC is perpendicular to the previous
components and is the next widest transect of the ellipse in the 'n'-dimensional
scatter plot, accounting for a decreasing amount of the variation in the data not
accounted for by the previous components.
When we deal with 3 or more bands of data, as in Landsat TM or IRS,
we get as many principal components as the number of bands used (TM = 7, IRS = 4);
but in practice, the first few components account for a high proportion of the variance
in the data, even 98-100% in some cases. Thus, PCA is
good for compressing data into fewer bands. By experience, we have found that
after the 3rd (Fig. 2.34) or 4th PC, there is excessive smoothing of the data and the
interpretability is greatly reduced. In certain contexts, a cover type of interest
that is otherwise obscured by poor contrast with neighbouring/adjacent cover types
can be enhanced by resorting to higher-order principal components.
2.5.2 Decorrelation Stretch
A linear contrast stretch is performed to enhance the image by altering the
distribution of the DN values of the image within the 0-255 range so as to utilize the full
range of values in a linear fashion. Contrast stretching methods are image
specific, i.e. a method may work well on one image but be disappointing on another.
So a new method has been developed, known as multispectral histogram
normalization contrast enhancement, aimed at the simultaneous enhancement of the
red, green and blue components of a false colour composite image by making use of
the full range of colours in the RGB colour cube.
The decorrelation stretch stretches the principal components of an image, not
the original image. Depending on the 'DN' ranges and the variance of the individual
input bands, these new images (PCs) will occupy only a portion of the 0-255 data
range. The effect of the decorrelation stretch is to spread the pixel values more
evenly through the RGB cube, as shown in Fig. 2.35. Each 'PC' is separately stretched to
fully utilize the data range. The new stretched 'PC' composite image is then
transformed back to the original data space.
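The forward-transform, stretch, inverse-transform sequence can be sketched as follows. This is a minimal NumPy illustration, with synthetic bands, of the decorrelation principle; the ERDAS implementation differs in its stretch details:

```python
import numpy as np

def decorrelation_stretch(image):
    """Decorrelation stretch sketch: rotate the bands to principal
    components, equalize each component's variance, then rotate back
    to band (RGB) space so the colours fill the cube more evenly."""
    n_bands, rows, cols = image.shape
    flat = image.reshape(n_bands, -1).astype(float)
    mean = flat.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat - mean))
    pcs = eigvecs.T @ (flat - mean)                       # forward transform
    pcs /= np.sqrt(np.maximum(eigvals[:, None], 1e-12))   # stretch each PC
    out = eigvecs @ pcs + mean                            # back to band space
    return out.reshape(n_bands, rows, cols)

# Three partly correlated synthetic bands; after the stretch, the band
# covariance matrix is (numerically) the identity, i.e. decorrelated.
rng = np.random.default_rng(0)
b1 = rng.normal(size=(4, 4))
b2 = 0.8 * b1 + 0.2 * rng.normal(size=(4, 4))
b3 = rng.normal(size=(4, 4))
stretched = decorrelation_stretch(np.stack([b1, b2, b3]))
```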


2.5.3 Tasseled Cap Transformation

The tasseled cap transformation defines a new coordinate system in which the soil line, vegetation and rocks are more clearly represented. The axes of this new coordinate system are termed "brightness", "greenness", "yellowness" and "none-such"; of these, the last two are no longer in common use. The brightness axis is associated with variations in soil or rock background reflectance, while the greenness axis is correlated with variation in green vegetation. This method of spectral enhancement was developed primarily for monitoring the growth of agricultural crops, but it has been found to enhance different sedimentary bands, important geological structures such as folds, faults and domes, and geomorphic elements such as valley fills, tidal flats, palaeochannels, gullies, rills, ridges and even terraces; to some extent it can also be used to identify different rock units.
The tasseled cap transformation thus offers a way to study the data along three structural axes: (1) brightness, useful for geology; (2) greenness, useful for vegetation and for enhancement of soil; and (3) wetness, useful for moisture studies. The transformation involves a rotation of the axes that is sensor-dependent.
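Because the transformation is a fixed rotation, it can be sketched as a single matrix multiplication. The coefficients below are the commonly cited Landsat-5 TM values (after Crist and Cicone), quoted here as an assumption of this sketch; since the rotation is sensor-dependent, they must be checked against the published tables for the sensor actually used.

```python
import numpy as np

# Commonly cited tasseled-cap coefficients for Landsat-5 TM bands
# 1-5 and 7 (brightness, greenness, wetness rows). These values are
# illustrative; verify them against the sensor's published tables.
TC_COEFFS = np.array([
    [ 0.3037,  0.2793,  0.4743, 0.5585,  0.5082,  0.1863],  # brightness
    [-0.2848, -0.2435, -0.5436, 0.7243,  0.0840, -0.1800],  # greenness
    [ 0.1509,  0.1973,  0.3279, 0.3406, -0.7112, -0.4572],  # wetness
])

def tasseled_cap(bands):
    """Rotate a six-band stack of shape (6, rows, cols) into
    brightness, greenness and wetness images."""
    n, rows, cols = bands.shape
    return (TC_COEFFS @ bands.reshape(n, -1)).reshape(3, rows, cols)

pixels = np.full((6, 2, 2), 0.2)   # toy scene of uniform reflectance
bgw = tasseled_cap(pixels)
print(bgw.shape)                   # (3, 2, 2)
```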
2.5.4 Hue, Saturation and Intensity Transformation
The primary colours red, green and blue drive the colour guns of a colour display unit, and these additive colours are used when viewing a three-band image of a multiband data set. An alternative approach to colour is the Intensity, Hue and Saturation (IHS) system, which presents colours much as the human eye perceives them. The IHS system is based on a colour sphere in which the vertical axis represents intensity, the radius represents saturation and the circumference represents hue. The intensity (I) axis represents brightness variations and ranges from 0 to 255. Hue (H) represents the dominant wavelength of colour; hue values commence with 0 at the midpoint of red tones and increase anticlockwise around the circumference of the sphere to conclude with 255 adjacent to 0. Saturation (S) represents the purity of colour and ranges from 0 at the centre of the colour sphere to 255 at the circumference. A value of 0 represents impure colour, in which all wavelengths are equally represented and the eye perceives a shade of grey between white and black.
Intermediate values of saturation represent pastel shades, whereas high values represent purer, more intense colours; any range of values can be used (even 0-1). Combining three bands of data from any sensor system tends to produce colour images lacking in saturation even when the bands are contrast stretched, and a pastel appearance is especially common on Landsat images. This under-saturation is due to the high degree of correlation between the spectral bands. To enhance saturation, the following steps are performed.
1. Transform any three bands of data from the RGB system into the IHS system so as to generate three component images representing intensity, hue and saturation. This transformation is achieved by the following equations:
I = R + G + B
H = (G - B) / (I - 3B)
S = (I - 3B) / I
2. Apply a linear contrast stretch to the original saturation image, as shown in Figures 2.36 to 2.47.
3. Transform the intensity, hue and enhanced saturation images from the IHS system back into three images of the RGB system, as shown in Figure 2.36,

[Figures 2.35 to 2.46: decorrelation stretch (float single) and IHS-RGB composite images, stretched (I&S) and non-stretched, for band combinations 132, 213, 231 and 312.]

2.37, 2.38, 2.39, 2.40, 2.41, 2.42, 2.43, 2.44, 2.45, 2.46 and 2.47. These enhanced RGB images are used to prepare the new colour composite image, in which the colour tones (hues) are enhanced over a wider range, improving the discrimination between colours.
The IHS transformation and its inverse are useful for combining images of different types; they are particularly valuable in geological applications, in mapping arid geomorphic features, and even in delineating rock types and structures such as faults and folds. The IHS-to-RGB transformation is intended as a complement to the standard RGB-to-IHS transform. In it, a minimum-maximum stretch is applied to intensity, saturation or both, so that they fully utilize the 0-1 value range. After stretching, the IHS image is transformed back into the original RGB space, resulting in an image very similar to the input image; the operation amounts to a colour coding of the data.
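The transform-stretch-invert sequence can be sketched per pixel with Python's standard colorsys module, which works in a related hue, saturation, value space on 0-1 values rather than the 0-255 colour sphere described above; the gain factor and the sample pixel are illustrative assumptions.

```python
import colorsys

def stretch_saturation(pixels, gain=1.5):
    """Transform RGB pixels (0-1 floats) to HSV, boost the saturation
    component, and transform back to RGB: a per-pixel sketch of the
    IHS saturation-stretch idea."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        s = min(1.0, s * gain)          # enhance saturation, clamp at 1
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# A desaturated reddish pixel becomes a purer red after the stretch.
before = [(0.6, 0.5, 0.5)]
after = stretch_saturation(before, gain=2.0)
print(tuple(round(c, 3) for c in after[0]))   # (0.6, 0.4, 0.4)
```

The intensity (value) component is left untouched, so only the purity of the colour changes, which is exactly the effect the three-step procedure above aims for.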
2.5.5 Indices
Indices, or ratioing, involve dividing the pixel values in one image by the corresponding pixel values in a second image. This is commonly attempted for two reasons. First, aspects hidden in the shape of the spectral reflectance curves of different cover types can be brought out. Second, undesirable effects on the recorded radiance caused by topographic variation, which results in variable illumination, can be reduced.
These two properties of band ratioing, the reduction of the topographic effect and the comparison of the shapes of the spectral reflectance curves of two bands, have led to its extensive use in exploration geology. Visual discrimination between altered rock types can also be improved by combining three band ratios into a false colour composite. Ratioing is used extensively in vegetation studies (Figures 2.49, 2.51 and 2.53). The number of possible ratios is limited only by the imagination of the user; a simple ratio takes values from zero up to some positive maximum.
Different types of indices are used, as shown in Figures 2.48 to 2.53:
1. Band X - Band Y (simple)
2. (Band X - Band Y) / (Band X + Band Y) (complex)
3. Band X / Band Y (common)
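The three index forms can be sketched directly as array operations; the two toy bands below stand in for the infrared and red bands and are purely illustrative.

```python
import numpy as np

def simple_difference(x, y):
    """Simple index: Band X - Band Y."""
    return x - y

def normalized_difference(x, y):
    """Complex (normalised) index: (X - Y) / (X + Y), e.g. NDVI
    when x is the infrared band and y is the red band."""
    return (x - y) / (x + y)

def simple_ratio(x, y):
    """Common ratio index: Band X / Band Y."""
    return x / y

ir = np.array([[0.5, 0.4]])   # toy infrared reflectances
r = np.array([[0.1, 0.2]])    # toy red reflectances
print(normalized_difference(ir, r).round(3))   # [[0.667 0.333]]
print(simple_ratio(ir, r))                     # [[5. 2.]]
```

Because the normalised form divides by the band sum, it also suppresses the illumination differences mentioned above, which is why NDVI-style indices are favoured in rugged terrain.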
2.6 Synthesis
In the present study, various image processing techniques were adopted, including image rectification, enhancement and classification, statistical analyses, radiometric enhancement, and spatial and spectral enhancement techniques. These analyses improve the visual interpretability of the imagery and support the interpretation of the various terrain parameters of the study area.

[Figures 2.47 to 2.54: IHS-RGB composite images (321, non-stretched and stretched I&S) and index images (unsigned 8-bit, SQRT, IR/R).]
