There are several image enhancement techniques that may be used for remote
sensing data. These may be categorized as (i) point operations (radiometric
enhancement) and (ii) local operations (spatial enhancement). Point operations
modify the brightness value of each pixel in an image independently. Unlike the
pixel-by-pixel operation used for radiometric enhancement, techniques used for
spatial enhancement are applied over neighborhoods of pixels. Although the
procedure still modifies the DN value of an image pixel, the new value is derived
from the DN values of its surrounding pixels. This spatial interrelationship of
pixel values leads to variations in the geometric detail of the perceived image.
Image enhancement operations are generally applied to image data after the
necessary restoration procedures have been performed. In particular, removal of
noise from the image data is an important prerequisite to most image
enhancements. If image enhancement is carried out without first removing the
noise from the digital data, the noise itself may be enhanced.
2. Contrast Stretch
Linear contrast enhancement can simply be described as:
y = f(x)
where x is the brightness value of a pixel in the raw image and y is the new
brightness value of the same pixel after the contrast stretch.
Minimum-Maximum Stretch
Paper: Remote Sensing and GIS
GEOLOGY Module: Image Enhancement: single and multi-band
images, contrast stretching and filtering
Linear stretch is applied to each pixel in the image using the algorithm
shown below:
DNout = [(DNorig − DNmin) / (DNmax − DNmin)] × (Lmax − Lmin) + Lmin
Where,
DNout = DN value allocated to the pixel in the stretched image
DNorig = DN value of the pixel in the original image
DNmin = min. DN value in the original image
DNmax = max. DN value in the original image
Lmin = min. brightness value allocated to the pixel in the stretched
image
Lmax = max. brightness value allocated to the pixel in the stretched
image
The minimum and maximum values, i.e. Lmin and Lmax, allocated to pixels in
the output image are generally assigned as 0 and 255 respectively. The
above equation may therefore be written as follows:
DNout = [(DNorig − DNmin) / (DNmax − DNmin)] × 255
New DN values so obtained are usually real numbers and are therefore
rounded off to the nearest integer.
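As a concrete illustration, the minimum-maximum stretch can be sketched in a few lines of Python; the 3x3 image values and the 0-255 output range are assumptions made for this example.

```python
# Minimal sketch of a minimum-maximum linear contrast stretch.
# The 3x3 image below and the 0-255 output range are assumed for
# illustration only.
def minmax_stretch(image, l_min=0, l_max=255):
    flat = [dn for row in image for dn in row]
    dn_min, dn_max = min(flat), max(flat)
    scale = (l_max - l_min) / (dn_max - dn_min)
    # DNout = (DNorig - DNmin) / (DNmax - DNmin) * (Lmax - Lmin) + Lmin,
    # rounded off to the nearest integer
    return [[round((dn - dn_min) * scale + l_min) for dn in row]
            for row in image]

raw = [[60, 62, 64],
       [60, 63, 66],
       [61, 64, 66]]
stretched = minmax_stretch(raw)   # DN range 60-66 mapped onto 0-255
```

Note how the narrow original range of 60-66 is spread over the full display range, making small brightness differences visible.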
In general, only a few pixels possess the two extreme values, and
therefore only a few pixels would appear at the two extreme ends of the
histogram. These few pixels nevertheless occupy a considerable portion
of the available range of brightness values.
Therefore, sometimes the two ends of the histogram are trimmed and
the remaining portion of the histogram is enhanced more prominently.
The pixels lying within the trimmed portions of the histogram are
assigned DN values of 0 or 255: a value of 0 is assigned to pixels
whose original DN value is less than the defined DNmin, and pixels
having original DN values greater than the defined DNmax are assigned a
value of 255.
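The trimming described above can be sketched as follows; the cut-off values (20 and 220) and the sample pixels are assumptions chosen for the example.

```python
# Sketch of a linear stretch with the histogram tails trimmed:
# pixels at or below DNmin map to 0, pixels at or above DNmax map
# to 255. The cut-offs (20 and 220) and sample values are assumed.
def trimmed_stretch(image, dn_min=20, dn_max=220):
    scale = 255.0 / (dn_max - dn_min)
    def stretch(dn):
        if dn <= dn_min:
            return 0          # trimmed lower tail
        if dn >= dn_max:
            return 255        # trimmed upper tail
        return round((dn - dn_min) * scale)
    return [[stretch(dn) for dn in row] for row in image]

out = trimmed_stretch([[5, 20, 120],
                       [220, 250, 70]])
```

Because the extreme tails are sacrificed, the pixels between the cut-offs receive a larger share of the output range than in a plain minimum-maximum stretch.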
Sinusoidal Stretch
In this method, when input and output DN values are plotted against
each other, a curve similar to a sinusoidal curve is obtained. This
method is therefore termed the sinusoidal stretch method.
Density Slicing
Density slicing divides the continuous range of DN values in an image
into a series of intervals, or slices, and all pixels falling within a
slice are displayed at a single grey level. Instead of grey levels,
different colors may also be used; this process is known as color
density slicing. A suitable selection of display colors may allow the
analyst to discern very fine details in the image.
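A small sketch of color density slicing follows; the slice boundaries and color names are arbitrary assumptions made for the example.

```python
# Illustrative sketch of color density slicing: the DN range is cut
# into intervals (slices) and every pixel in a slice is displayed in
# one color. Slice boundaries and color names are arbitrary choices.
SLICES = [(0, 63, "blue"), (64, 127, "green"),
          (128, 191, "yellow"), (192, 255, "red")]

def density_slice(image):
    def color_of(dn):
        for low, high, color in SLICES:
            if low <= dn <= high:
                return color
    return [[color_of(dn) for dn in row] for row in image]

sliced = density_slice([[10, 100],
                        [150, 240]])
```

In practice the analyst would choose the slice boundaries to match the DN ranges of the features of interest, e.g. water, vegetation and bare soil.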
3. Convolution
Convolution derives a new value for each pixel as a weighted sum of the DN
values in the neighborhood covered by a kernel. A kernel of uniform weights
may be used to prepare a low frequency enhancement of the image. For example,
consider a window of original DN values such as:
65 62 62
61 64 62
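A single kernel-window operation of this sort can be sketched as follows; the uniform 1/9 kernel and the third row of the window are assumptions made to complete the example.

```python
# Sketch of one convolution step: a 3x3 kernel of uniform weights
# (each 1/9) applied to a single 3x3 window of DN values.
KERNEL = [[1 / 9] * 3 for _ in range(3)]

def apply_kernel(window, kernel):
    # weighted sum of the window values under the kernel
    return sum(window[i][j] * kernel[i][j]
               for i in range(3) for j in range(3))

window = [[65, 62, 62],
          [61, 64, 62],
          [63, 60, 63]]     # third row assumed to complete the window
new_center = round(apply_kernel(window, KERNEL))
```

The weighted sum replaces the central pixel's value; sliding the kernel over every pixel of the image yields the filtered output.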
4. Image Filtering
One of the important features of remotely sensed data is a parameter known as
spatial frequency, defined as the number of changes in pixel value per unit
distance in an image. If very few changes occur in the brightness values of
pixels in a portion of a remote sensing image, it is termed a low frequency
area. On the other hand, if considerable changes in brightness value are
observed over a small distance, the area is said to be one of high frequency.
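The idea can be made concrete with a toy calculation; the two sample scan lines below are assumed for illustration.

```python
# Toy illustration of spatial frequency: the number of changes in
# pixel value per unit (one-pixel) distance along a single scan line.
# The two sample lines are assumed for illustration.
def changes_per_pixel(line):
    changes = sum(1 for a, b in zip(line, line[1:]) if a != b)
    return changes / (len(line) - 1)

low_freq = [50, 50, 50, 51, 51, 51]    # nearly uniform area
high_freq = [50, 90, 40, 85, 30, 95]   # rapid brightness changes
```

The nearly uniform line changes value once in five steps (low frequency), whereas the second line changes at every step (high frequency).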
In the image filtering technique, various algorithms are used to enhance patterns in
imagery based on spatial frequency within small regions of an image. The algorithm
is usually designed to identify large differences in brightness value, or more subtle
differences that occur in a specific orientation.
Thus, image filtering is a technique in which an image is separated into its
constituent spatial frequencies, and selected spatial frequencies are modified
to emphasize specific features in a remote sensing image. The following three
types of spatial filters are commonly used in the processing of remote sensing
data:
Low pass filters
High pass filters
Edge enhancement filters
Use of a 3x3 kernel causes the output image to be shorter by two lines and
two columns as compared to the input image, resulting in the loss of some
data near the image boundaries. This loss may be avoided by repeating the
brightness values of the original border pixels, i.e. artificially extending
the input image beyond its border before filtering. Alternatively, the border
rows and columns of the output image may be filled by replicating the
adjacent output values, depending on the behavior of the image near the
border. Mean, median and mode filters are commonly used forms of low pass
filters. Average (mean) filtering may be employed using a kernel of uniform
weights; the filtered value is the sum of the nine products divided by 9.
Following is an example of a 3x3 kernel of equal weights.
1 1 1
1 1 1
1 1 1
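A minimal sketch of such average filtering, including border replication so that the output retains the input size, is given below; the sample image is assumed for illustration.

```python
# Sketch of 3x3 average (mean) low pass filtering. Border pixels of
# the input are replicated so that the output keeps the input size.
# The sample image is assumed for illustration.
def mean_filter(image):
    rows, cols = len(image), len(image[0])
    def pix(r, c):
        # replicate the nearest border pixel outside the image
        return image[min(max(r, 0), rows - 1)][min(max(c, 0), cols - 1)]
    return [[round(sum(pix(r + dr, c + dc)
                       for dr in (-1, 0, 1)
                       for dc in (-1, 0, 1)) / 9)
             for c in range(cols)]
            for r in range(rows)]

smoothed = mean_filter([[9, 9, 9],
                        [9, 0, 9],
                        [9, 9, 9]])   # the isolated 0 is averaged away
```

The isolated low value is smoothed into its neighborhood, which is exactly the low frequency (smoothing) effect described above.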
Sometimes, weighted low pass filters, such as those shown below, may also be
used (the output in each case is divided by the sum of the weights, 16):
1 2 1
2 4 2
1 2 1

1 1 1
1 8 1
1 1 1
Prewitt kernels and Sobel kernels, named after their respective developers,
are commonly used for edge enhancement. Examples of these kernels are
given below.
Prewitt Kernels:
x-direction:
-1 -1 -1
 0  0  0
 1  1  1
y-direction:
-1 0 1
-1 0 1
-1 0 1
Sobel Kernels:
x-direction:
-1 -2 -1
 0  0  0
 1  2  1
y-direction:
-1 0 1
-2 0 2
-1 0 1
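To see how such a kernel responds to an edge, consider this small Python sketch; the Sobel x-direction kernel (-1 -2 -1 / 0 0 0 / 1 2 1) is used, and the 3x3 window values are assumed for the example.

```python
# Sketch of edge enhancement with the Sobel x-direction kernel:
# the response is large across a horizontal edge and zero over a
# uniform area. The sample windows are assumed for illustration.
SOBEL_X = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def kernel_response(window, kernel):
    # sum of element-wise products of a 3x3 window and a 3x3 kernel
    return sum(window[i][j] * kernel[i][j]
               for i in range(3) for j in range(3))

edge = [[10, 10, 10],
        [10, 10, 10],
        [90, 90, 90]]        # dark above, bright below: horizontal edge
flat = [[40, 40, 40]] * 3    # uniform area

edge_response = kernel_response(edge, SOBEL_X)
flat_response = kernel_response(flat, SOBEL_X)
```

Because the kernel weights sum to zero, uniform areas produce no response, while brightness changes across the kernel produce large positive or negative values.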
5. Summary
Q5. Using a 3x3 averaging kernel with all of its coefficients equal to 1/9,
determine the modified value of the central pixel in the above matrix.
Q6. In the above problem, determine the modified value of the central pixel using
the kernel as given below:
1 1 1
0 0 0
1 1 1
Q8. What do you understand by edge detection? Give a suitable example of a
kernel that may be used for edge detection.