
CHAPTER 1

INTRODUCTION

Pictures are the most common and convenient means of conveying or transmitting

information. A picture is worth a thousand words. Pictures concisely convey information about

positions, sizes and inter-relationships between objects. They portray spatial information that we

can recognize as objects. Human beings are good at deriving information from such images because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form [1].

DIGITAL IMAGE

A digital remotely sensed image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery. Associated with each pixel is a number known as the Digital Number (DN) or Brightness Value (BV), which depicts the average radiance of a relatively small area within a scene (Fig. 1). A small number indicates low average radiance from the area, while a high number indicates high radiance. The size of this area affects the reproduction of detail within the scene: as pixel size is reduced, more scene detail is preserved in the digital representation.

Figure 1, shown below, illustrates the structure of a digital image and of a multispectral image. Both consist of scan lines and pixels; a digital image additionally carries an origin for its coordinate values, while a multispectral image carries multiple bands.
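The row/column/band structure described above can be sketched as a three-dimensional array. The scene values below are randomly generated stand-ins for real DNs, not data from any actual sensor:

```python
import numpy as np

# A hypothetical 3-band multispectral scene: 4 rows x 5 columns x 3 bands.
# Each entry is a Digital Number (DN) / Brightness Value (BV) in 0..255.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(4, 5, 3), dtype=np.uint8)

# DN of the pixel at row i=2, column j=3 in band k=1:
dn = scene[2, 3, 1]

# A higher DN indicates higher average radiance from that ground area.
print(int(dn))
```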


1.1 What Is Digital Image Processing?

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity

or gray level of the image at that point. When x, y, and the intensity values of f are all finite,

discrete quantities, we call the image a digital image. The field of digital image processing refers

to processing digital images by means of a digital computer. Note that a digital image is

composed of a finite number of elements, each of which has a particular location and value.

These elements are called picture elements, image elements, pels, and pixels. Pixel is the term

used most widely to denote the elements of a digital image.

The processes of acquiring an image of the area containing the text, preprocessing that

image, extracting (segmenting) the individual characters, describing the characters in a form

suitable for computer processing, and recognizing those individual characters are in the scope of

what we call digital image processing in this book.


Making sense of the content of the page may be viewed as being in the domain of image

analysis and even computer vision, depending on the level of complexity implied by the

statement "making sense." As will become evident shortly, digital image processing, as we have

defined it, is used successfully in a broad range of areas of exceptional social and economic

value.

The goal of digital image enhancement is to produce a processed image that is suitable

for a given application. For example, we might require an image that is easily inspected by a

human observer or an image that can be analyzed and interpreted by a computer [2]. There are

two distinct strategies to achieve this goal.

First, the image can be displayed appropriately so that the conveyed information is

maximized. Hopefully, this will help a human (or computer) extract the desired information.

Second, the image can be processed so that the informative part of the data is retained

and the rest discarded. This requires a definition of the informative part, and it makes an

enhancement technique application specific.

Image enhancement algorithms can be classified in terms of two properties. An algorithm

utilizes either point or spatial processing, and it incorporates either linear or nonlinear operations.

Color images carry a vast amount of information with them. But this information is

somewhat hidden, so the human eye tends to fail in analyzing them. Most importantly, small changes in characteristics of the information, such as intensity, color and texture, are difficult to perceive. We therefore need an efficient color image segmentation technique to analyze them. But the result of any color image segmentation technique depends entirely on the quality of the image concerned.
In the case of satellite images especially, image quality is degraded by noise that is generally introduced during the capture, transmission and acquisition of the image. Segmenting such noisy images does not produce an effective analysis result. Hence, we need preprocessing techniques to remove artifacts, outliers and noise from the images before proceeding to further analysis stages [3].

Image enhancement is such a preprocessing technique, where the goal is to suppress the noise while preserving the integrity of edges and other detailed information [4][5]. Noise can be removed completely only when the real causes of its formation are studied and investigated, but in practice we cannot investigate them completely. The only thing we can do is introduce mathematically based techniques to remove the noise as far as possible [6].

Color image enhancement techniques involve more efforts than gray image enhancement

techniques due to the following two reasons:

(1) In the case of color images, we need to consider vectors instead of scalars.

(2) For color images, the complexity of image perception is also a considerable factor.

Over the past few years, satellite remote sensing data have played an important role in different scientific and need-based applications in the fields of agriculture, geology, forestry, biodiversity conservation, regional planning, education, warfare, etc. Multispectral satellite

data (e.g. Landsat TM) combined with high resolution data (e.g. aerial photographs, SPOT

satellite panchromatic data) reveal the surface geology in arid areas where the vegetation cover

can be neglected and the landscape is dominated by extensive outcrops of different rock types.
In contrast to classical, time-consuming geological field work with its expensive and

complex logistics, remote sensing techniques offer an efficient and low cost addition to

preliminary geological investigations [7].

Satellite image enhancement is used to make visual interpretation and understanding of imagery easier. The advantage of digital imagery is that it allows us to manipulate the

digital pixel values in an image. Although radiometric corrections for illumination, atmospheric

influences, and sensor characteristics may be done prior to distribution of data to the user, the

image may still not be optimized for visual interpretation. Remote sensing devices, particularly

those operated from satellite platforms, must be designed to cope with levels of

target/background energy which are typical of all conditions likely to be encountered in routine

use. With large variations in spectral response from a diverse range of targets (e.g. forest, deserts,

snowfields, water, etc.), no generic radiometric correction could optimally account for and

display the optimum brightness range and contrast for all targets. Thus, for each application and

each image, a custom adjustment of the range and distribution of brightness values is usually

necessary.

Image processing is processing for which the input is an image, such as a photograph or video frame; the output after processing may be either an image or parameters related to the image. Image processing is used in many applications, such as remote sensing and medical imaging. There are many types of images, such as panchromatic (PAN), multispectral (MS), hyperspectral (HS) and synthetic aperture radar (SAR), covering different parts of the electromagnetic spectrum and captured by different Earth observation satellites. Satellite images have issues with their resolution: images that lose their high-frequency content appear blurred.


Many other issues are also associated with satellite images. Enhancement of the image is therefore necessary to improve its visibility, remove unwanted noise and artifacts, improve contrast and bring out more detail, so that useful information can be extracted from the enhanced image. This is the main motivation behind image enhancement methods [8,9].

Satellite image processing plays a vital role for research developments in Astronomy,

Remote Sensing, GIS, Agriculture Monitoring, Disaster Management and many other fields of

study. However, processing these satellite images requires a large amount of computation time due to their complexity and size, which is a barrier to real-time decision making. To perform the job faster, distributed computing can be a suitable solution. Clusters and grids are currently the two most familiar and powerful distributed systems for high performance parallel applications [10].

Pictures play an important role in everyday life; without them, we could not recognize much of what exists in our environment. Images are objects that can carry many concepts, colors, details and other information. Even so, a picture is rarely perfect: it may lack detail or lighting, or contain noise, so that we cannot understand what it is meant to convey. Image enhancement is therefore necessary, especially today, when images are very important objects in fields such as geography, industry, medicine and entertainment.

Digital image processing is a field that is widely used in large-scale experiments, and it offers many algorithms and methods. These algorithms can be applied to an input image for further processing; with digital image enhancement, even an image carrying poor information can become useful.
Through digital image enhancement, the quality of a digital image becomes better than that of the original. The basic idea of image enhancement is to improve contrast and detail. Many methods exist, ranging from filtering methods and histogram methods to methods combining multiple algorithms to produce excellent results. To use image enhancement, we must understand what the image contains and what problem motivates us to apply a particular method, whether it concerns detail, color, lighting or something else, because not every method will produce a good image after processing. Sometimes an attempt to improve an image instead degrades it, so we must estimate carefully which enhancement method is needed [11].

With current technology, image enhancement can be performed easily according to our own wishes. For example, if we want to improve our pictures or photos, we can fix them with an app on a laptop or smartphone without having to manipulate the internals of the image; such apps are essentially collections of image enhancement methods.

History of Digital Image Processing

Early 1920s: One of the first applications of digital imaging was in the newspaper industry.

- The Bartlane cable picture transmission service

- Images were transferred by submarine cable between London and New York

- Pictures were coded for cable transfer and reconstructed at the receiving end
on a telegraph printer
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images

- New reproduction processes based on photographic techniques

- Increased number of tones in reproduced images


1960s: Improvements in computing technology and the onset of the space race led to a surge of

work in digital image processing

- 1964: Computers used to improve the quality of images of the moon taken by the

Ranger 7 probe

- Such techniques were used in other space missions including the Apollo landings

1970s: Digital image processing begins to be used in medical applications

1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine

for the invention of tomography, the technology behind Computerized Axial Tomography (CAT)

scans

1980s - Today: The use of digital image processing techniques has exploded and they are now

used for all kinds of tasks in all kinds of areas

– Image enhancement/restoration

– Artistic effects

– Medical visualization

– Industrial inspection

– Law enforcement

– Human computer interfaces


Key Stages in Digital Image Processing

Figure 2 Key stages of digital image processing

IMAGE ACQUISITION

It could be as simple as being given an image which is in digital form. The main work involves:

a) Scaling

b) Color conversion (RGB to Gray or vice-versa)

IMAGE ENHANCEMENT

It is among the simplest and most appealing areas of image processing. It is also used to extract hidden details from an image, and it is subjective.

IMAGE RESTORATION

It also deals with improving the appeal of an image, but it is objective: restoration is based on mathematical or probabilistic models of image degradation.


COLOR IMAGE PROCESSING

It deals with pseudo-color and full-color image processing and the color models applicable to digital image processing.

IMAGE COMPRESSION

It involves developing functions to perform this operation, and mainly deals with image size or resolution.

MORPHOLOGICAL PROCESSING

It deals with tools for extracting image components that are useful in the representation &

description of shape.

SEGMENTATION PROCEDURE

It includes partitioning an image into its constituent parts or objects. Autonomous

segmentation is the most difficult task in Image Processing.

REPRESENTATION & DESCRIPTION

It follows the output of the segmentation stage; choosing a representation is only part of the solution for transforming raw data into processed data.

OBJECT DETECTION AND RECOGNITION

It is a process that assigns a label to an object based on its descriptor.

1.2 DIGITAL IMAGE PROCESSING STAGES:

Digital Image Processing is largely concerned with four basic operations: image

restoration, image enhancement, image classification, image transformation.


Image restoration is concerned with the correction and calibration of images in order to achieve as faithful a representation of the Earth's surface as possible, a fundamental consideration for all applications.

Image enhancement is predominantly concerned with the modification of images to

optimize their appearance to the visual system. Visual analysis is a key element, even in digital

image processing, and the effects of these techniques can be dramatic.

Image classification refers to the computer-assisted interpretation of images—an

operation that is vital to GIS.

Finally, image transformation refers to the derivation of new imagery as a result of some

mathematical treatment of the raw image bands.

Image analysis systems can be categorized into the following:

(i) Pre-processing

(ii) Image Enhancement

(iii) Image Transformation

(iv) Image Classification and Analysis

Figure 3 Steps for Image analysis: Image Pre-processing → Image Enhancement → Image Transformation → Image Classification and Analysis


1.2.1 Pre-processing

Pre-processing functions involve the operations required prior to the main data analysis and consist of processes aimed at geometric, radiometric and atmospheric correction, improving the ability to interpret the image components qualitatively and quantitatively. These processes correct the data for sensor irregularities and remove unwanted sensor distortion and atmospheric noise (radiometric corrections).

In image preprocessing, the image data recorded by sensors on a satellite contain errors related to geometry and to the brightness values of the pixels. These errors are corrected using appropriate mathematical models, either deterministic or statistical.

Image enhancement is the modification of an image by changing the pixel brightness values to improve its visual impact. Image enhancement involves a collection of techniques that are

used to improve the visual appearance of an image, or to convert the image to a form which is

better suited for human or machine interpretation.

Sometimes images obtained from satellites and from conventional and digital cameras lack contrast and brightness because of the limitations of the imaging subsystems and of the illumination conditions while capturing the image. Images may also contain different types of noise. In image

enhancement, the goal is to accentuate certain image features for subsequent analysis or for

image display [15]. Examples include contrast and edge enhancement, pseudo-coloring, noise

filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis and image display. The enhancement process itself does not increase the inherent

information content in the data. It simply emphasizes certain specified image characteristics.

Enhancement algorithms are generally interactive and application dependent.


Some of the enhancement techniques are:

a. Contrast Stretching

b. Noise Filtering

c. Histogram modification

Contrast Stretching

Some images (e.g., over water bodies, deserts, dense forests, snow, clouds, and heterogeneous regions under hazy conditions) are homogeneous, i.e., they do not show much variation in their gray levels. In terms of the histogram representation, they are characterized by very narrow peaks.

The homogeneity can also be due to incorrect illumination of the scene [16]. The images thus obtained are not easily interpretable, owing to poor human perceptibility: only a narrow range of gray levels is occupied even though a far wider range is available. Contrast stretching methods are designed for exactly such frequently encountered situations; different stretching techniques have been developed to stretch the narrow range over the whole of the available dynamic range.
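A minimal sketch of such a stretch, assuming a linear min-max mapping (one of the simplest stretching techniques) and 8-bit data:

```python
import numpy as np

def stretch_contrast(img, out_min=0, out_max=255):
    """Linear min-max stretch: map the image's narrow gray-level range
    onto the full available dynamic range [out_min, out_max]."""
    img = img.astype(np.float64)
    in_min, in_max = img.min(), img.max()
    if in_max == in_min:                     # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.uint8)
    stretched = (img - in_min) / (in_max - in_min) * (out_max - out_min) + out_min
    return stretched.round().astype(np.uint8)

# A homogeneous scene whose DNs occupy only a narrow band (100..120):
narrow = np.array([[100, 105], [110, 120]], dtype=np.uint8)
print(stretch_contrast(narrow))   # now spans the full 0..255 range
```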

Noise Filtering

Noise filtering is used to filter unnecessary information from an image and to remove various types of noise. This feature is mostly interactive. Various filters, such as low pass, high pass, mean and median, are available [16].
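As an illustration, a hand-rolled 3x3 median filter; a real system would use a library routine, and this sketch only processes interior pixels:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: replaces each interior pixel with the median of
    its neighborhood, suppressing impulse ('salt-and-pepper') noise."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

# A flat region corrupted by a single impulse-noise pixel:
noisy = np.full((5, 5), 50, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter3(noisy)
print(clean[2, 2])   # impulse removed: back to 50
```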

Histogram Modification

The histogram is of great importance in image enhancement, as it reflects the characteristics of the image. By modifying the histogram, image characteristics can be modified. One such example is histogram equalization.
Histogram equalization is a nonlinear stretch that redistributes pixel values so that there is

approximately the same number of pixels with each value within a range. The result

approximates a flat histogram. Therefore, contrast is increased at the peaks and lessened at the

tails [16].
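A sketch of this redistribution via the cumulative distribution; the mapping below is the classic CDF-based formula, and the sample gray levels are made up:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Histogram equalization: a nonlinear stretch that redistributes pixel
    values via the cumulative distribution, approximating a flat histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()            # first nonzero CDF value
    # Classic mapping: scale the CDF onto the full gray-level range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut.astype(np.uint8)[img]

# A dark, low-contrast image whose values cluster in 50..58:
dark = np.array([[50, 50, 52], [52, 54, 54], [56, 56, 58]], dtype=np.uint8)
print(equalize_histogram(dark))
```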

Image Segmentation

Segmentation is one of the key problems in image processing. Image segmentation is the process that subdivides an image into its constituent parts or objects. The level to which this subdivision is carried depends on the problem being solved; that is, segmentation should stop when the objects of interest in an application have been isolated. In autonomous air-to-ground target acquisition, for example, suppose our interest lies in identifying vehicles on a road: the first step is to segment the road from the image and then to segment the contents of the road down to potential vehicles. Image thresholding techniques are used for image segmentation.

After thresholding a binary image is formed where all object pixels have one gray level

and all background pixels have another - generally the object pixels are 'black' and the

background is 'white'. The best threshold is the one that selects all the object pixels and maps

them to 'black'. Various approaches for the automatic selection of the threshold have been

proposed. Thresholding can be defined as mapping of the gray scale into the binary set {0, 1} :

s(x, y) = 0, if g(x, y) < T(x, y)
s(x, y) = 1, if g(x, y) ≥ T(x, y)

where s(x, y) is the value of the segmented image, g(x, y) is the gray level of the pixel (x, y) and T(x, y) is the threshold value at the coordinates (x, y). In the simplest case T(x, y) is

coordinate independent and a constant for the whole image. It can be selected, for instance, on

the basis of the gray level histogram.
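This mapping can be sketched directly, assuming the simplest case of a coordinate-independent threshold T:

```python
import numpy as np

def threshold(img, T):
    """Global thresholding: map gray levels into the binary set {0, 1}.
    s(x, y) = 0 if g(x, y) < T, else 1."""
    return (img >= T).astype(np.uint8)

g = np.array([[ 30,  40, 200],
              [ 35, 210, 220],
              [ 25,  45, 215]], dtype=np.uint8)

# A threshold chosen between the two histogram peaks
# (dark background around 30, bright objects around 210):
s = threshold(g, T=128)
print(s)
```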


When the histogram has two pronounced maxima, which reflect gray levels of object(s) and

background, it is possible to select a single threshold for the entire image.

A method based on this idea, which uses a correlation criterion to select the best threshold, is described below. Sometimes gray level histograms have only one maximum, which can be caused, e.g., by inhomogeneous illumination of various regions of the image. In such a case it is impossible to select a single threshold value for the entire image, and a local binarization technique must be applied. General methods for binarizing inhomogeneously illuminated images, however, are not available. Segmentation of images sometimes involves not only discrimination between objects and background, but also separation between different regions. One method for such separation is known as watershed segmentation.

1.2.2 Image Enhancement

In order to aid visual interpretation, the visual appearance of the objects in the image can be improved by image enhancement techniques such as gray level stretching, to improve contrast, and spatial filtering, to enhance edges. Enhancements are used to make the image easier to interpret visually. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image. Although radiometric corrections for illumination, atmospheric influences, and sensor characteristics may be done prior to distribution of data to the user, the image may still not be optimized for visual interpretation. Image enhancement methods are of four types:

(i) Radiometric Enhancement

(ii) Spatial Enhancement

(iii) Spectral Enhancement


(iv) Geometric Enhancement
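As an illustration of the spatial enhancement category, a Laplacian-based sharpening sketch; the 4-neighbor kernel is one common choice, not a method prescribed by this chapter, and only interior pixels are processed:

```python
import numpy as np

def sharpen(img):
    """Spatial enhancement sketch: subtract a Laplacian estimate from the
    image so that edges (rapid gray-level changes) are accentuated."""
    img = img.astype(np.float64)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            lap = (img[i-1, j] + img[i+1, j] + img[i, j-1] + img[i, j+1]
                   - 4 * img[i, j])
            out[i, j] = img[i, j] - lap
    return np.clip(out, 0, 255).astype(np.uint8)

# A vertical edge between a dark and a bright region:
edge = np.tile(np.array([10, 10, 10, 200, 200, 200], dtype=np.uint8), (5, 1))
print(sharpen(edge))   # the edge is accentuated by over/undershoot
```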

1.2.3 Image Transformation

The choice of a particular transform in a given application depends on the amount of

reconstruction error that can be tolerated and the computational resources available.

Compression is achieved during the quantization of the transformed coefficients not

during the transformation step. Image modeling or transformation is aimed at the exploitation of

statistical characteristics of the image (i.e. high correlation, redundancy) [13]. Some transform

techniques are:

 Fourier Transform (FFT, DFT, WFT)

 Discrete Cosine Transform (DCT)

 Walsh-Hadamard Transform (WHT)

 Wavelet Transform (CWT, DWT, FWT)
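A sketch of the transform step, using an orthonormal DCT-II basis matrix (the transform behind JPEG coding); the 8x8 block here is synthetic, chosen to show energy compaction:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

n = 8
C = dct_matrix(n)
block = np.outer(np.arange(n), np.ones(n)) * 10      # smooth 8x8 ramp block
coeffs = C @ block @ C.T                             # 2-D DCT of the block

# Energy compaction: a smooth block's energy concentrates in a few
# low-frequency coefficients; compression quantizes away the rest.
kept = np.abs(coeffs) > 1e-6
print(int(kept.sum()), "of", n * n, "coefficients are significant")
```

Because the matrix is orthonormal, the block is recovered exactly by the inverse transform C.T @ coeffs @ C; the loss in compression comes from quantizing the coefficients, not from the transform itself.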

1.2.4 Image Classification and Analysis



Image classification is the labeling of a pixel or a group of pixels based on its grey value

[5]. Classification is one of the most often used methods of information extraction. In

Classification, usually multiple features are used for a set of pixels i.e., many images of a

particular object are needed.

In Remote Sensing area, this procedure assumes that the imagery of a specific geographic

area is collected in multiple regions of the electromagnetic spectrum and is in good registration.

Most of the information extraction techniques rely on analysis of the spectral reflectance

properties of such imagery and employ special algorithms designed to perform various types of

'spectral analysis'. The process of multispectral classification can be performed using either of

the two methods: Supervised or Unsupervised [16].

Classification of remotely sensed data is used to assign corresponding levels to groups with homogeneous characteristics, with the aim of discriminating multiple objects from each other within the image. Each level is called a class. Classification is executed on the basis of spectral or spectrally defined features, such as density, texture, etc., in the feature space; it can be said that classification divides the feature space into several classes based on a decision rule. In many cases, classification is undertaken by computer using mathematical classification techniques [14]. The following methods are considered for determining a decision rule for classification:

1. Supervised Classification: In order to determine a decision rule for classification, it is

necessary to know the spectral characteristics or features with respect to the population of each

class. The spectral features can be measured using ground-based spectrometers. However, due to atmospheric effects, spectral features measured on the ground cannot always be used directly. For this reason, training data are usually sampled from clearly identified training areas corresponding to the defined classes in order to estimate the population statistics. This is called supervised classification. Statistically unbiased sampling of the training data should be made in order to represent the population correctly.
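A minimal supervised-classification sketch using training-area statistics: class mean signatures and a minimum-distance decision rule. The band values and class labels below are hypothetical:

```python
import numpy as np

def train_class_means(samples, labels):
    """Supervised step: estimate each class's mean spectral signature
    from training pixels sampled over known training areas."""
    classes = np.unique(labels)
    return classes, np.array([samples[labels == c].mean(axis=0) for c in classes])

def classify_min_distance(pixels, classes, means):
    """Assign each pixel to the class with the nearest mean signature."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Hypothetical 2-band training pixels: class 0 = water, class 1 = vegetation.
train = np.array([[20, 10], [22, 12], [80, 120], [82, 118]], dtype=float)
labels = np.array([0, 0, 1, 1])
classes, means = train_class_means(train, labels)

unknown = np.array([[21, 11], [79, 119]], dtype=float)
print(classify_min_distance(unknown, classes, means))   # -> [0 1]
```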

2. Unsupervised Classification: In the case where there is little information about the area to be classified, only the image characteristics are used, as follows.

 Multiple groups, from randomly sampled data, will be mechanically divided into

homogeneous spectral classes using a clustering technique.

 The clustered classes are then used for estimating the population statistics. This

classification technique is called unsupervised classification.
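The clustering step can be sketched with a plain k-means loop; operational algorithms such as ISODATA add cluster split/merge logic not shown here, and the pixel values are made up:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal clustering sketch for unsupervised classification: pixels are
    mechanically grouped into k spectrally homogeneous classes."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None] - centers[None, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if np.any(assign == c):
                centers[c] = pixels[assign == c].mean(axis=0)
    return assign, centers

# Two spectrally distinct pixel groups (2 bands each):
pixels = np.array([[10., 12.], [11., 13.], [200., 190.], [198., 195.]])
assign, centers = kmeans(pixels, k=2)
print(assign)   # the two groups fall into two different clusters
```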

1.3 SATELLITE IMAGE PROCESSING

Satellite images are rich in content and play a vital role in providing geographical information. Satellite and remote sensing images provide quantitative and qualitative information that reduces the complexity of field work and study time. Satellite remote sensing technologies collect data/images at regular intervals. The volume of data received at data centers is huge and growing exponentially as the technology advances rapidly. There is a strong need for effective and efficient mechanisms to extract and interpret valuable information from massive numbers of satellite images. Satellite image classification is a powerful technique for extracting such information.

Satellite image processing is a technique to enhance raw images received from cameras or sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications. One of its products is the creation of thematic maps showing the spatial distribution of particular information. These maps are structured by spectral bands, which have constant density; where bands overlap, their densities add. Image analysis is performed on images at multiple scales, capturing comprehensive information about a system for different applications. Examples of themes are soil, vegetation, water depth and air. The monitoring of critical events requires a huge volume of surveillance data and an extremely powerful real-time processing infrastructure.

The process by which an image is geometrically corrected is called rectification; it is the process by which the geometry of an image is made planimetric. Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification. Scanning and digitizing produce images that are planar but do not contain any map coordinate information; such images need only to be geo-referenced, which is a much simpler process than rectification [12]. Ground Control Points (GCPs) are specific pixels in the input image for which the output map coordinates are known. To solve the transformation equations, a least squares solution may be found that minimizes the sum of the squares of the errors. Care is needed when selecting ground control points, as their number, quality and distribution affect the result of the rectification. Once the mapping transformation has been determined, a procedure called resampling is applied. Resampling matches the coordinates of image pixels to their real-world coordinates and then writes a new image on a pixel-by-pixel basis. Since the grid of pixels in the source image rarely matches the grid of the reference image, the output grid pixel values are calculated using a resampling method. Image processing techniques can be categorized into four main processing stages: image preprocessing, enhancement, transformation and classification.
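The GCP least-squares step can be sketched as fitting an affine image-to-map transformation; the GCP coordinates below are hypothetical and chosen to be exactly affine:

```python
import numpy as np

# Rectification sketch: fit an affine mapping from image (row, col) to map
# (x, y) coordinates by least squares over Ground Control Points (GCPs).
gcp_img = np.array([[0, 0], [0, 100], [100, 0], [100, 100], [50, 50]], float)
gcp_map = np.array([[500, 900], [700, 900], [500, 700], [700, 700], [600, 800]],
                   float)

# Design matrix [row, col, 1]; lstsq minimizes the sum of squared errors.
A = np.hstack([gcp_img, np.ones((len(gcp_img), 1))])
coef, res, *_ = np.linalg.lstsq(A, gcp_map, rcond=None)

def to_map(row, col):
    """Apply the fitted affine transformation to an image coordinate."""
    return np.array([row, col, 1.0]) @ coef

print(to_map(25, 75))   # map coordinates of image pixel (25, 75)
```

With more GCPs than unknowns, the residuals of the fit give a direct quality check on the chosen control points.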


Satellite image classification is a process of grouping pixels into meaningful classes; it is a multi-step workflow, and can also be described as extracting information from satellite images. Satellite image classification is not inherently complex, but the analyst has to make many decisions and choices during the process. It is involved in the interpretation of remote sensing images, spatial data mining, the study of various vegetation types such as agriculture and forestry, and the study of urban areas to determine the various land uses in an area. The current research work is a literature review of satellite image classification methods and techniques: it describes and provides details on the various methods for the analyst, with an emphasis on automated classification methods and techniques.

In remote sensing images, many predictions can be made without any human intervention. A remotely sensed image is a digital representation of the Earth; through it, places that cannot be physically accessed can still be viewed, which supports work on those interior parts. In remotely sensed image data, each pixel represents an area of the Earth at a specific location. If a pixel satisfies a certain set of criteria, then that pixel is assigned to the class that corresponds to those criteria. This process is referred to as image classification.

Presently, image classification methods can be grouped into two main categories depending on the image primitive used: pixel-based and object-based methods. Pixel-based methods classify individual pixels without taking into account any neighborhood or spatial information. Object/region-based methods are also able to handle high resolution imagery, which complicates the classification process for most pixel-based methods.


Depending on the type of information extracted from the original data, classes are identified with known features on the ground. An example of a classified image is a land cover map showing vegetation, bare land, pasture, urban areas, etc. In remote sensing imagery, a pixel might represent a mixture of class covers, within-class variability, or other complex surface cover patterns that cannot be properly described by one class. Measuring vegetation index levels is very important for determining the used land and the level of agricultural activity in a particular region. To achieve this, a remote sensing image is taken for processing; in this work a LANDSAT image is taken and processed to identify the used land. In the processing, the LANDSAT image is first checked for freedom from noise.
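As an illustration of how a vegetation index can be derived from two LANDSAT bands, the sketch below computes NDVI = (NIR − Red) / (NIR + Red), which is the standard definition. The band values here are made up, and Python/NumPy is used for illustration in place of MATLAB; this is not the thesis implementation.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index for two co-registered bands."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # Guard against division by zero where both bands are dark.
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Hypothetical 2x2 band values (not real LANDSAT data).
red = np.array([[50, 100], [30, 0]])
nir = np.array([[150, 100], [90, 0]])
print(ndvi(red, nir))  # values near +1 indicate dense vegetation
```

Thresholding the resulting index image is one simple way to separate vegetated from non-vegetated (used or bare) land.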

From this image the required features are extracted; features such as vegetation indices, used land, forest and unused land are considered. After extracting the features from the image, classification algorithms are applied to obtain the different classification groups; here KNN, SVM and Fuzzy algorithms are applied to produce the classified image. These results were compared with MOKNN and MOSVM, modified algorithms that give better results than the existing ones. To assess the overall accuracy of the algorithms, different metrics are used: user's accuracy, producer's accuracy, omission error and commission error.
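All four of these metrics are derived from the confusion matrix of a classification: producer's accuracy and omission error come from the row (reference) totals, user's accuracy and commission error from the column (predicted) totals. The sketch below, a minimal illustration with a made-up 3-class matrix, computes them in Python:

```python
import numpy as np

def accuracy_metrics(cm):
    """Accuracy metrics from a confusion matrix.

    cm[i, j] = number of reference-class-i pixels labeled as class j.
    """
    cm = np.asarray(cm, dtype=float)
    diag = np.diag(cm)
    producers = diag / cm.sum(axis=1)  # per reference class (rows)
    users = diag / cm.sum(axis=0)      # per predicted class (columns)
    return {
        "overall": diag.sum() / cm.sum(),
        "producers_accuracy": producers,
        "omission_error": 1 - producers,
        "users_accuracy": users,
        "commission_error": 1 - users,
    }

# Hypothetical matrix for three classes (e.g. vegetation, used land, unused land).
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 2, 47]]
m = accuracy_metrics(cm)
print(m["overall"])  # 0.88
```

Omission error is simply 1 − producer's accuracy, and commission error is 1 − user's accuracy, so reporting either pair conveys the same information.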

1.3.1 Need of satellite image classification

Satellite image classification plays a major role in extracting and interpreting valuable information from massive satellite images. Satellite image classification is required for:

 Spatial data mining [6]

 Extracting information for an application

 Thematic map creation


 Visual and digital satellite image interpretation

 Field surveys

 Effective decision making

 Disaster management

1.3.2 Satellite image classification techniques

There are several methods and techniques for satellite image classification. Figure 4 shows the hierarchy of satellite image classification methods, which can be broadly classified into three categories:

 Automated

 Manual

 Hybrid
The diagrammatic representation of the satellite image classification methods is shown below, reconstructed from the original figure as a list:

Satellite image classification methods
 Automated
o Supervised: Artificial Neural Network, Maximum Likelihood, Minimum Distance, Parallelepiped, Mahalanobis Distance, K-Nearest Neighbor, Decision Tree, Object-Based Image Analysis, Segmentation, Semantic-Based
o Unsupervised: ISO Data, Support Vector Machine, K-Means
 Manual
 Hybrid

Figure-4 Satellite image classification methods hierarchy


1. Automated

Automated satellite image classification methods use algorithms that are applied systematically over the entire satellite image to group pixels into meaningful categories. The majority of classification methods fall under this category. Automated methods are further classified into two categories: 1) supervised and 2) unsupervised classification methods.

 Supervised

Supervised classification methods require input from an analyst, known as a training set. The training sample is the most important factor in supervised satellite image classification; the accuracy of these methods depends highly on the samples taken for training. Training samples are of two types: one used for classification and another for assessing classification accuracy. The training set is provided before classification is run. Major supervised classification methods use the following techniques:

 Artificial Neural Network (ANN)

 Binary Decision Tree (BDT)

 Image Segmentation

Various classification techniques use different kinds of similarity-matching methods. Supervised classification includes additional functionality such as analyzing input data, creating training samples and signature files, and determining the quality of the training samples and signature files.
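As a minimal illustration of the supervised workflow described above, the sketch below assigns each unlabeled pixel the class of its nearest training sample in feature space, a 1-nearest-neighbor rule. It is a sketch of the general idea, not the thesis implementation; the two-band values and class names are invented, and Python is used for illustration.

```python
import numpy as np

def nearest_neighbor_classify(train_x, train_y, pixels):
    """Assign each pixel the label of its closest training sample."""
    labels = []
    for p in pixels:
        # Euclidean distance from this pixel to every training sample.
        d = np.linalg.norm(train_x - p, axis=1)
        labels.append(train_y[int(np.argmin(d))])
    return labels

# Hypothetical training set: two-band feature vectors with class labels.
train_x = np.array([[0.1, 0.8], [0.9, 0.2], [0.2, 0.7]])
train_y = ["vegetation", "bare land", "vegetation"]

print(nearest_neighbor_classify(
    train_x, train_y, np.array([[0.15, 0.75], [0.8, 0.3]])))
# ['vegetation', 'bare land']
```

The quality of the training samples directly bounds the quality of such a classifier, which is why supervised methods devote so much attention to them.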

Artificial Neural Network: Algorithms that fall under the Artificial Neural Network (ANN) category simulate the human learning process to associate the correct, meaningful labels with image pixels. An advantage of ANN-based satellite image classification algorithms is that supplementary data are easy to incorporate into the classification process, which improves classification accuracy.


Binary Decision Tree: Binary Decision Tree (BDT) satellite image classification algorithms are machine learning techniques. The decision tree technique comprises a set of binary rules that define the meaningful classes to be associated with individual pixels. Various decision tree software packages are available to generate the binary rules; such software takes a training set and supplementary data to define effective rules.

Image Segmentation: Segmentation plays a vital role in satellite image processing, analysis and pattern recognition. Satellite image segmentation techniques are not directly related to image classification: image segmentation groups relatively homogeneous pixels into segments. Segmentation algorithms provide parameters that allow the analyst to specify the relative size and shape of the segments. A segmented image can then be classified at the segment level instead of the pixel level; segment-level classification algorithms are much faster than pixel-level classification methods.
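The segment-level idea can be sketched in one dimension: group consecutive, similar pixel values along a scan line into segments, then classify each segment once by its mean rather than classifying every pixel. This is a toy illustration with made-up values and a hypothetical tolerance parameter, not a real segmentation algorithm from the thesis.

```python
def segment_scanline(values, tol=10):
    """Group consecutive, similar pixel values into segments (1-D sketch)."""
    segments = [[values[0]]]
    for v in values[1:]:
        seg = segments[-1]
        mean = sum(seg) / len(seg)
        if abs(v - mean) <= tol:
            seg.append(v)          # similar enough: extend the current segment
        else:
            segments.append([v])   # too different: start a new segment
    return segments

# Toy scan line: three visually distinct runs of brightness values.
line = [12, 14, 11, 200, 205, 198, 90, 95]
segs = segment_scanline(line)
# Classify once per segment by its mean, not once per pixel.
means = [sum(s) / len(s) for s in segs]
print(len(segs), means)  # 3 segments
```

With 8 pixels collapsed into 3 segments, a classifier runs 3 times instead of 8, which is the source of the speed-up mentioned above.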

 Unsupervised

Unsupervised classification techniques use clustering mechanisms to group satellite image pixels into unlabeled classes/clusters. The analyst then assigns meaningful labels to the clusters to produce a well-classified satellite image. The most common unsupervised satellite image classification methods are ISODATA, Support Vector Machine (SVM) and K-Means.
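A minimal K-Means sketch shows the unsupervised idea: pixels are grouped purely by spectral similarity, and the resulting cluster numbers carry no meaning until the analyst labels them. The one-band brightness values below are invented, and Python/NumPy is used for illustration.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Cluster 1-D pixel values into k groups; returns (centers, assignments)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct pixel values.
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        assign = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(assign == j):
                centers[j] = pixels[assign == j].mean()
    return centers, assign

# Toy brightness values forming two obvious groups (dark and bright pixels).
pixels = np.array([10.0, 12.0, 11.0, 200.0, 198.0, 205.0])
centers, assign = kmeans(pixels, k=2)
# The dark pixels share one cluster id, the bright pixels the other;
# the analyst then names the clusters (e.g. "water", "urban").
```

No training set is supplied at any point, which is the defining contrast with the supervised methods above.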

2. Manual

Manual satellite image classification methods are robust and effective, but they consume more time. In manual methods the analyst must be familiar with the area covered by the satellite image; the efficiency and accuracy of the classification depend on the analyst's knowledge of, and familiarity with, the field of study.


3. Hybrid

Hybrid satellite image classification methods combine the advantages of automated and manual methods. The hybrid approach uses automated methods for an initial classification; manual methods are then used to refine the classification and correct errors.

1.4 OBJECTIVE:

 The primary objective of this research work is to obtain an enhanced image; to this end, a spatial non-linear filtering technique, the Directional Filtering algorithm, is presented.

 To study the growth and status of urban sprawl in Salem city.

 To reduce noise and smooth regions using a mean shift filter during pre-processing.

 To improve visual quality: from the denoised image, an enhanced image is obtained using histogram equalization.

 To measure the performance of enhancement using quantitative measures such as peak signal-to-noise ratio (PSNR) and Mean Square Error (MSE), as well as in terms of the visual quality of the images, using MATLAB.
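The two quantitative measures named above follow directly from their standard definitions: MSE is the mean squared pixel difference between a reference image and the processed image, and PSNR = 10·log10(MAX²/MSE), with MAX = 255 for 8-bit images. The sketch below uses toy 2×2 images and Python in place of MATLAB:

```python
import numpy as np

def mse(ref, test):
    """Mean Square Error between two same-sized images."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in decibels (higher is better)."""
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * np.log10(max_val ** 2 / e)

# Toy 2x2 8-bit images differing by 5 gray levels everywhere.
ref = np.array([[100, 100], [100, 100]], dtype=np.uint8)
noisy = ref + 5
print(mse(ref, noisy))             # 25.0
print(round(psnr(ref, noisy), 2))  # 34.15 (dB)
```

A higher PSNR (lower MSE) against the reference indicates a more faithful enhancement, which is how these measures complement the visual-quality assessment.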

1.5 ORGANIZATION OF THESIS

The thesis is organized in the following manner.

Chapter 1 discusses the introduction to satellite image processing and resource management systems, and also presents the organization of the thesis.

Chapter 2 presents a detailed literature review in the areas of satellite imagery, noise signals, transmission channels, image segmentation and classifiers. The analysis of the survey, the motivation of the thesis and the objectives of the thesis are also discussed.

Chapter 3 proposes the enhancement of satellite images using hierarchical histogram equalization with denoising.


Chapter 4 presents image segmentation for satellite images: preprocessing, a mean shift filter and the K-means clustering algorithm for segmenting a satellite image into various zones.

Chapter 5 presents the mapping and analysis of water resources in Salem city using GIS and remote sensing.

Chapter 6 proposes an urban sprawl classification analysis using image processing techniques in a geo-information system.

Chapter 7 concludes the research and summarizes the contributions of the thesis.

Suggestions for future work are also included.
