ABSTRACT

Image classification has a significant role in the field of medical diagnosis as well as mining analysis, and has even been used for cancer diagnosis in recent years. Clustering analysis is a valuable and useful tool for image classification and object diagnosis. A variety of clustering algorithms are available, and this is still a topic of interest in the image processing field. However, these clustering algorithms face difficulties in meeting requirements for optimum quality, automation, and robustness. In this paper, we propose two clustering algorithm combinations built around the K-Means algorithm that can tackle some of these problems, and we compare the two novel combinations. The experimental results demonstrate that the proposed algorithms are very effective in producing the desired clusters of the given data sets as well as in diagnosis, and they are very useful for image classification as well as for the extraction of objects. Land use/land cover (LU/LC) changes were determined in an urban area, Vellore, by using Geographical Information Systems (GIS) and remote sensing technology. These studies employed the Survey of India topographic map 57O/6 and the remote sensing data of LISS III and PAN of IRS 1D of 2003. The study area was classified into eight categories on the basis of field study, geographical conditions, and remote sensing data. The comparison of LU/LC in 1976 and 2003 derived from toposheet and satellite imagery interpretation indicates that there is a significant increase in built-up area, open forest, plantation, and other lands. It is also noted that a substantial amount of agricultural land, water spread area, and dense forest vanished during the period of study, which may be due to rapid urbanization of the study area. No mining activities were found in the study area in 1976, but a small addition of mining land was found in 2003.
INTRODUCTION

Today, the pull of cities, clusters, and mega-cities is still growing. More and more people move to the urban centers of their country to participate in urban life, hoping to earn more money than in the countryside. In developing countries urbanization is rapid for three reasons: population growth caused by natural increase, migration toward urban areas, and the reclassification of rural areas as urban centers [1]. Rapid population growth and urbanization are of great concern for the sustainability of cities; the more people on the earth, the greater the impact on the environment and the pressure on resources [1]. Population growth, regional in-migration, and increasing ecological problems require advanced methods for city planners to support viable development in these quickly changing districts [2]. A better understanding of these impacts enhances the estimation, modeling, and forecasting of ecosystem dynamics from local to regional levels [3]. Fast urbanization, especially urban land expansion, and related problems such as poverty, unemployment, poor sanitary conditions, and environmental degradation remain challenging topics in most countries. The rapid development of worldwide urbanization, its related problems, and the information planners will need (the extent and spatial distribution of various urban land uses, housing characteristics, population growth patterns, and the concentration of resources at the expense of the surrounding countryside) cannot be analyzed without a systematic link between new technologies and in-situ observations [1]. Because of its merits, especially wide-area coverage, abundant data, and multispectral resolution, remote sensing technology has been commonly applied to monitor and analyze land use dynamics and urban expansion. Remote sensing and GIS are very important tools for obtaining precise and timely information on the spatial distribution of land use and land cover over large areas.

SIGNIFICANCE OF REMOTE SENSING TECHNOLOGY FOR URBAN

LANDSCAPE

Since the launch of the first Earth Resources Technology Satellite in 1972, there has been noteworthy activity related to mapping and monitoring environmental change resulting from man-made activities and natural events [4,5].

Urban remote sensing, particularly the application of space-borne sensors, is a relatively new field for geographers. Before the introduction of space-borne platforms, airborne platforms were the principal source of data for remote sensing applications. Recently, however, satellite-based sensors have become competitive. This advance is the effect of methodological enhancements that now permit satellite remote sensing systems to obtain imagery of high spatial resolution [1]. Aerial photography has a very long archived data record, while satellite remote sensing for Earth observation started in 1972 with the first Landsat satellite. The most recent (since 1999) generation of remote sensing satellites affords very high-spatial-resolution data: IKONOS (1 m) and QuickBird (0.61 m). Satellite data are nowadays suitable for the mapping and monitoring tasks that are important sources of information for municipal planning. In particular, as the spatial resolution of remote sensing satellites advances, there is an enhanced emphasis on applications for urban analysis. High-spatial-resolution data assist in the examination of the less planned urban cores of older cities and the expanding edge cities of developing nations [1,4,5]. Land use is a dynamic phenomenon that changes through time and space due to human pressure and development. Assessing present land use and its periodic change is useful for urban planners, policy makers, and natural resource managers, and remote sensing offers an important means of detecting and analyzing temporal changes [6]. Understanding the growth dynamics of urban clusters and land use changes is indispensable for ecologically sustainable developmental planning. Thus, there is an obvious need for continuous monitoring of the phenomena of growth and for mapping and scrutinizing LULC changes [4,5]. Defining the effects of land-use and land-cover change on the Earth system depends on an understanding of past land-use practices, contemporary land-use and land-cover patterns, and projections of future land use and cover, as affected by human distribution, economic development, technology, and other factors [4,5,7,8]. Concerning urban structure and composition, many objects are made up of small patches of different materials whose spatial arrangement produces heterogeneous pixels in earth observation satellite imagery. Furthermore, urban landscapes have 3D components. As for the factors affecting urban remote sensing, it is necessary to examine geometric resolution (to separate objects spatially), spectral and radiometric resolution (to distinguish objects thematically), and temporal resolution (to get consistent image material on separate dates) [1,9,10]. To analyze dynamic urban landscapes, tasks are carried out at different levels. Tasks at the lowest level of information need concern single blocks of buildings and require the largest scales (1:1000-1:5000), since individual houses, roads, etc., have to be detected in detail. The medium level would focus on a whole city and requires medium scales (1:10 000-1:25 000). The highest level focuses on regions, agglomerations, and their surrounding areas and does not require a detailed differentiation inside the city, consequently needing only small scales (1:50 000-1:100 000) [1].

LAND-COVER AND LAND-USE CHANGE DETECTION APPROACH

Satellite imagery has usually been applied to examine dynamic urban land use change. Urban land use monitoring encompasses the use of multi-temporal images to detect the variation in land use due to environmental conditions and human activities between the acquisition dates of the images [9]. Land use and land cover are vital components in understanding the interactions of human activities with the environment. Land use can be defined as the human activities on the land. Humans use land for different activities such as agriculture, urban development, logging, grazing, and mining, among many others [11]. Land use is associated with human activity over an explicit portion of land [12], while land cover is defined as the kind and state of vegetation, such as forest, cropland, grass cover, wetland pastures, roads, and urban area [11]. Land use entails diverse land covers found on the earth's surface along with abstract notions mixing socio-cultural aspects that have little physical importance in reflectance properties and hence a limited relation to remote sensing. Remote sensing data record the spectral properties of surface materials and are therefore more closely related to land cover. Land use cannot be measured directly by remote sensing; rather, it requires visual interpretation or sophisticated image processing and spatial pattern analyses to derive land use from total land-cover information and other supplementary data [4,5].

STUDY AREA DESCRIPTION

The study area, the Vellore region, is located near the metropolitan city of Chennai, at a distance of about 145 km, in southern peninsular India. Vellore is a world-famous historical town at an altitude of 182.9 m (13.05° N latitude and 79.05° E longitude) which represents an urban area surrounded by major industrial and agricultural activities along with dense forest.


2 DIGITAL IMAGE PROCESSING

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image [27]. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the most widely used term.
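As a hedged illustration of this definition (assuming MATLAB with the Image Processing Toolbox and its bundled sample image cameraman.tif), a digital image is simply a finite matrix of intensity values:

% Minimal sketch: a digital image as a finite matrix of gray levels.
% cameraman.tif ships with the Image Processing Toolbox (assumption).
f = imread('cameraman.tif');    % 256x256 uint8 grayscale image
[rows, cols] = size(f);         % finite spatial extent of the sampled grid
level = f(120, 200);            % gray level at one (row, col) location
fprintf('%dx%d image, gray level at (120,200) = %d\n', rows, cols, level);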

Unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images, such as ultrasound, electron microscopy, and computer-generated images. Thus DIP encompasses a wide and varied field of applications.

There are three types of computerized processes: low-, mid-, and high-level. Low-level processes involve primitive operations such as noise reduction, contrast enhancement, and image sharpening; here both the input and output are images. Mid-level processes involve tasks such as segmentation (partitioning images into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification of individual objects; here the input is an image but the output is the set of attributes extracted from the image [27]. High-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, performing cognitive functions normally associated with human vision.

DIGITAL IMAGE REPRESENTATION


An image can be defined as f(x, y), where (x, y) are the spatial coordinates and the amplitude is the intensity of the image at that point. The term gray level is often used to refer to the intensity of monochrome images. Color images are formed by a combination of individual 2-D images. For example, in the RGB color system a color image consists of three individual component images (red, green, blue). Two steps are involved in converting continuous data into digital form.

Sampling and Quantization


The sampling rate determines the spatial resolution of the digitized image, while the quantization level determines the number of gray levels in the digitized image. The magnitude of the sampled image is expressed as a digital value in image processing. The transition between continuous values of the image function and its digital equivalent is called quantization [28]. The number of quantization levels should be high enough for human perception of fine shading details in the image. The occurrence of false contours is the main problem in an image that has been quantized with insufficient brightness levels.
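As an illustrative sketch (assuming MATLAB and the toolbox sample image cameraman.tif), re-quantizing an 8-bit image to fewer gray levels makes these false contours visible:

% Re-quantize an 8-bit image to progressively fewer gray levels.
f = imread('cameraman.tif');
for levels = [256 16 4]
    step = 256 / levels;                       % size of one quantization bin
    q = uint8(floor(double(f) / step) * step); % quantize to 'levels' gray levels
    figure, imshow(q), title(sprintf('%d gray levels', levels));
end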

ELEMENTS OF DIGITAL IMAGE PROCESSING

Interpretation and analysis of remote sensing data involve the identification and measurement of various targets in an image in order to extract useful information about them. Two main methods can be used to interpret and extract information from images.

Visual interpretation of images is based on feature tone (color), pattern, shape, texture, shadow, and association. The identification of targets is performed by a human.

Digital processing and analysis may be performed using a computer (without manual intervention by a human interpreter). This method can be used to enhance data, to correct or restore the image, to automatically identify targets and extract information, and to delineate different areas in an image into thematic classes.
FUNDAMENTAL STEPS OF DIGITAL IMAGE PROCESSING

There are several fundamental steps, and each of them may have sub-steps. The fundamental steps are described below with a diagram.

Figure 1. Steps of digital image processing: image acquisition, image filtering and enhancement, color image processing, wavelets and multiresolution processing, compression, morphological processing, segmentation, representation and description, and object recognition, all linked to a common knowledge base; the outputs of the early processes are generally images, while the outputs of the later processes are generally image attributes.

Figure 1 represents the overall flow from the problem domain through these processing stages to the extracted image attributes.

Image Acquisition
This is the first of the fundamental steps of digital image processing.
After the image has been obtained, various methods of processing can be applied to the
image to perform the many different vision tasks required today. However, if the image
has not been acquired satisfactorily, then the intended tasks may not be achievable,
even with the aid of some form of image enhancement.
Image preprocessing
Image preprocessing can significantly increase the reliability of an optical inspection. Several filter operations that intensify or reduce certain image details enable an easier or faster evaluation. Users are able to optimize a camera image with just a few clicks.

Image Enhancement
Image enhancement is the modification of an image by changing the pixel brightness
values to improve its visual impact. It involves a collection of techniques that are used
to improve the visual appearance of an image or to convert the image to a form which is
better suited for human or machine interpretation.

Sometimes images obtained from satellites and from conventional and digital cameras lack contrast and brightness because of the limitations of the imaging subsystems and the illumination conditions while capturing the image. Images may also contain different types of noise. In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display [28]. Examples include contrast and edge enhancement, pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and image display. The enhancement process itself does not increase the inherent information content in the data; it simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application dependent. Some of the enhancement techniques are contrast stretching, noise filtering, and histogram modification.

Contrast Stretching
Some images (e.g., over water bodies, deserts, dense forests, snow, or clouds, and under hazy conditions over heterogeneous regions) are homogeneous, i.e., they do not have much change in their gray levels. In terms of the histogram representation, they are characterized by very narrow peaks. The homogeneity can also be due to incorrect illumination of the scene [27]. The images thus obtained are not easily interpretable due to poor human perceptibility, because only a narrow range of gray levels is occupied out of the wider range available. Contrast stretching methods are designed for such frequently encountered situations: different stretching techniques have been developed to stretch the narrow occupied range to the whole of the available dynamic range.
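A minimal sketch of linear contrast stretching, assuming MATLAB's Image Processing Toolbox (stretchlim/imadjust) and the bundled low-contrast sample image pout.tif:

% Map the occupied gray-level range onto the full dynamic range.
f = imread('pout.tif');              % low-contrast sample image
g = imadjust(f, stretchlim(f), []);  % stretch the 1%-99% limits to [0, 255]
figure, imshowpair(f, g, 'montage');
title('Original (left) vs contrast-stretched (right)');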

Noise Filtering
Noise filtering is used to remove unnecessary information from an image as well as various types of noise. This feature is mostly interactive. Various filters such as low pass, high pass, mean, and median are available [27].
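A hedged sketch of two of these filters, assuming the Image Processing Toolbox (imnoise, fspecial, imfilter, medfilt2) and the sample image eight.tif:

% Compare a low-pass (mean) filter and a median filter on impulse noise.
f = imread('eight.tif');
noisy = imnoise(f, 'salt & pepper', 0.05);       % add impulse noise
meanF = imfilter(noisy, fspecial('average', 3)); % 3x3 low-pass (mean) filter
medF  = medfilt2(noisy, [3 3]);                  % 3x3 median filter
figure
subplot(1,3,1), imshow(noisy), title('Noisy');
subplot(1,3,2), imshow(meanF), title('Mean filtered');
subplot(1,3,3), imshow(medF),  title('Median filtered');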

Histogram Modification
The histogram is very important in image enhancement; it reflects the characteristics of the image. By modifying the histogram, image characteristics can be modified. One such example is histogram equalization. Histogram equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram; therefore, contrast is increased at the peaks and lessened at the tails [27].
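A short sketch of histogram equalization with the toolbox function histeq (pout.tif is an assumed sample image):

% Equalize the histogram and compare before/after distributions.
f = imread('pout.tif');
g = histeq(f);            % nonlinear stretch toward a flat histogram
figure
subplot(2,2,1), imshow(f), title('Original');
subplot(2,2,2), imhist(f), title('Original histogram');
subplot(2,2,3), imshow(g), title('Equalized');
subplot(2,2,4), imhist(g), title('Equalized histogram');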
Image Restoration
Image restoration is the operation of taking a corrupted/noisy image and estimating the clean original image. Corruption may come in many forms, such as motion blur, noise, and camera misfocus. Restoration may be considered as reversing the damage done to an image by a known cause (e.g., removing blur caused by linear motion or removing optical distortions). It is concerned with filtering the observed image to minimize the effect of degradations. The effectiveness of image restoration depends on the extent and accuracy of the knowledge of the degradation process as well as on the filter design.

Color Image Processing


Color image processing is an area that has been gaining importance because of the significant increase in the use of digital images over the Internet. It may include color modeling and processing in a digital domain, among other topics.

Wavelets and Multiresolution Processing


Wavelets are the foundation for representing images in various degrees of resolution. Images are subdivided successively into smaller regions for data compression and for pyramidal representation.

Compression
Compression reduces the irrelevance and redundancy of the image data in order to store or transmit the data in an efficient form. It is concerned with minimizing the number of bits required to represent an image. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods are especially used at low bit rates.

Figure 2. Image compression


Morphological Processing
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. Morphology refers to a broad class of non-linear shape filters. As with linear filters, the operation is defined by a matrix of elements applied to input image neighborhoods, but instead of a sum of products, a minimum or maximum of sums is computed.
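A minimal sketch of this min/max neighborhood idea, using grayscale erosion and dilation from the Image Processing Toolbox (the disk radius is an arbitrary assumption):

% Grayscale morphology: local min (erosion) and local max (dilation).
f  = imread('cameraman.tif');
se = strel('disk', 3);     % structuring element: the neighborhood shape
er = imerode(f, se);       % local minimum over the neighborhood
di = imdilate(f, se);      % local maximum over the neighborhood
figure, imshowpair(er, di, 'montage');
title('Erosion (left) vs dilation (right)');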

Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In
general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward a
successful solution of imaging problems that require objects to be identified
individually.

Representation and Description


Representation and description almost always follow the output of a segmentation
stage, which usually is raw pixel data, constituting either the boundary of a region or
all the points in the region itself. Choosing a representation is only part of the solution
for transforming raw data into a form suitable for subsequent computer processing.
Description deals with extracting attributes that result in some quantitative information
of interest or are the basis for differentiating one class of objects from another.

Image recognition assigns a label to an object based on the information provided by its
descriptors. Image recognition is the process of identifying and detecting an object or a
feature in a digital image or video. It is used in many applications like systems for
factory automation, toll booth monitoring and security surveillance. Typical image
recognition algorithms include: License plate matching, scene change detection, face
recognition, pattern and gradient matching and optical character recognition. Specific
image recognition applications include classifying digits using HOG features and an
SVM classifier.

Image interpretation is the process of examining an aerial photo or digital remote


sensing image and manually identifying the features in that image. This method can be
highly reliable and a wide variety of features can be identified, such as riparian
vegetation type and condition, and anthropogenic features such as roads and mineral
extraction activity. However, the process is time consuming and requires a skilled
analyst who has a ground-level familiarity with the study area. Image interpretation is
based on elements that are inherent in imagery. These image characteristics (also called image attributes) comprise seven elements that we use to derive information about objects in an image: size, shape, tone/color, texture, shadow, association, and pattern.

Image Classification
Image classification is the labeling of a pixel or a group of pixels based on its grey value [29].

Classification is one of the most often used methods of information extraction. In classification, multiple features are usually used for a set of pixels, i.e., many images of a particular object are needed. In remote sensing, this procedure assumes that the imagery of a specific geographic area is collected in multiple regions of the electromagnetic spectrum and is in good registration. Most information extraction techniques rely on analysis of the spectral reflectance properties of such imagery and employ special algorithms designed to perform various types of 'spectral analysis'. The process of multispectral classification can be performed using either of two methods: supervised and unsupervised [30].

In supervised classification, the identity and location of some of the land cover types, such as urban, wetland, and forest, are known a priori through a combination of field work and toposheets. The analyst attempts to locate specific sites in the remotely sensed data that represent homogeneous examples of these land cover types. These areas are commonly referred to as training sites, because the spectral characteristics of these known areas are used to 'train' the classification algorithm for eventual land cover mapping of the remainder of the image. Multivariate statistical parameters are calculated for each training site. Every pixel both within and outside these training sites is then evaluated and assigned to the class of which it has the highest likelihood of being a member [31].

In unsupervised classification, the identities of the land cover classes within a scene are not generally known a priori, because ground truth is lacking or surface features within the scene are not well defined. The computer is required to group pixel data into different spectral classes according to some statistically determined criteria [1].

Figure 3. Image classification

Figure 3 shows a sample satellite image and sensor image; the two images illustrate how an image is classified in digital image processing.

Knowledge Base
Knowledge may be as simple as detailing regions of an image where the information of
interest is known to be located, thus limiting the search that has to be conducted in
seeking that information. The knowledge base also can be quite complex, such as an
interrelated list of all major possible defects in a materials inspection problem or an
image database containing high-resolution satellite images of a region in connection
with change-detection applications.

Image processing in Future


Computer technology has seen fast development, a revolution ignited in part by imaging. Contrary to common belief, computers are not yet able to match humans in calculations related to image processing and analysis. But with the increasing sophistication and power of modern computing, computation will go beyond the conventional von Neumann sequential architecture and may contemplate optical execution as well. Parallel and distributed computing paradigms are anticipated to improve response times for image processing tasks.

COMPONENTS OF AN IMAGE PROCESSING SYSTEM


In sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object to be imaged. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. Specialized image processing hardware usually consists of the digitizer plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), and noise reduction.

The computer in an image processing system ranges from a general-purpose PC to a supercomputer. Mass storage capability is a must in image processing applications. Image displays in use today are mainly color TV monitors. Hardcopy devices for recording images include laser printers, film cameras, inkjet units, and CD-ROMs. Networks provide communication between systems.
Figure 1.4. Components of a general-purpose image processing system: image sensors and specialized image processing hardware connected to the problem domain, a computer running image processing software, mass storage, an image display, hardcopy devices, and a network connection.

APPLICATIONS OF IMAGE PROCESSING


Visual information is the most important type of information perceived, processed and
interpreted by the human brain. One-third of the cortical area of the human brain is
dedicated to visual information processing. Digital image processing, as a computer-
based technology, carries out automatic processing, manipulation, and interpretation of
such visual information, and it plays an increasingly important role in many aspects of
our daily life, as well as in a wide variety of disciplines and fields of science and
technology, with applications such as television, photography, robotics, remote sensing,
medical diagnosis and industrial inspection.

Applications include computerized photography; space image processing (e.g., Hubble Space Telescope images, interplanetary probe images); medical/biological image processing (e.g., interpretation of X-ray images and blood/cellular microscope images); automatic character recognition (zip code and license plate recognition); fingerprint, face, and iris recognition; remote sensing (aerial and satellite image interpretation); reconnaissance; and industrial applications (e.g., product inspection/sorting).

GOALS OF IMAGE PROCESSING


The goals of image processing can be divided into five groups.

1. Visualization - observe objects that are not visible.
2. Image restoration and sharpening - create a better image.
3. Image retrieval - search for an image of interest.
4. Measurement of pattern - measure various objects in an image.
5. Image recognition - distinguish the objects in an image.

Digital image processing allows the use of much more complex algorithms for image processing and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means [33]. In particular, digital image processing is the only practical technology for classification, feature extraction, pattern recognition, projection, and multi-scale signal analysis. Some techniques used in digital image processing include pixelization, linear filtering, principal component analysis, independent component analysis, hidden Markov models, anisotropic diffusion, partial differential equations, self-organizing maps, neural networks, and wavelets [34].

MATLAB TOOLS

MATLAB (Matrix Laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran, and Python.

MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, but quickly spread to many other domains. It is now also used in education, in particular for teaching linear algebra and numerical analysis, and is popular amongst scientists involved in image processing [35].

Applications of MATLAB

Math and computation.
Algorithm development.
Modeling, simulation, and prototyping.
Data analysis, exploration, and visualization.
Scientific and engineering graphics.
Application development, including graphical user interface building.

Review of Literature

This section illustrates a few recent satellite image classification methods. J. Shabnam et al. [11] introduced a supervised satellite image classification method to classify very high resolution satellite images into specific classes using fuzzy logic. This method classifies satellite images into five major classes: shadow, vegetation, road, building, and bare land. It uses image segmentation and fuzzy techniques, applying two levels of segmentation: first-level segmentation identifies and classifies shadow, vegetation, and road, while second-level segmentation identifies buildings. Further, it uses a contextual check to classify unclassified segments and regions. Fuzzy techniques are used to improve the classification accuracy at the borders of objects.

Reference [12] presents a supervised satellite image classification method to determine water, urban, and green land on satellite images. This method takes a training set for every class and computes threshold values using k-means and LDA [13] techniques. The method extracts low-level features from satellite images and applies the k-means algorithm to group them into unlabeled clusters. Meaningful labels are assigned to the unlabeled classes by comparing the threshold values with the extracted features.

Reference [14] describes an ontology-based supervised ocean satellite image classification method, illustrating the power of ontologies in ocean satellite image classification. The method extracts low-level features from ocean satellite images and represents them in OWL file format. This OWL file is merged with domain ontologies and labeling rules. Labeling rules, training rules, binary decision tree rules, and expert rules are represented using the SWRL [15] language. The method produces classification results for a given ocean satellite image with the support of training, a human expert, decision support, and labeling rules. Reference [14] also provides a tool as a plug-in for the Protégé ontology editor; the tool supports ocean satellite images with the support of domain ontologies.

S. Muhammad et al. [1] proposed a supervised satellite image classification method using a decision tree technique. This method extracts features from the satellite image based on pixel color and intensity. The extracted features help determine the objects residing in the satellite images, and the method classifies satellite images using a decision tree with the support of the identified objects. Reference [16] presents a method for the classification of satellite images into multiple predefined land cover classes. The method is automated and uses segment-level classification with the support of a training set; it includes contextual properties of the predefined classes to improve the classification accuracy.

A. Selim [17] proposed a classification method using a Bayesian technique. The method uses spatial information for the classification of high-resolution satellite images and performs classification in two phases. In phase 1, spectral and textural features are extracted for each pixel to train Bayesian classifiers with discrete non-parametric density models. In phase 2, an iterative split-and-merge algorithm is used to convert the pixel-level classification maps into contiguous regions.

ISODATA [9] is the most common unsupervised satellite classification technique. It creates a predefined number of unlabeled clusters/classes in a satellite image, and meaningful labels are later assigned to the clusters. ISODATA needs several parameters that control the number of clusters and the iterations to be run. In a few cases clusters may contain pixels of different classes; in such situations ISODATA uses the cluster-busting [18] technique to label the complex classes. K-Means [10] is a popular statistics and data mining technique. It partitions n observations into k clusters based on the Euclidean mean value. The advantages of the K-Means technique are that it is simple to process and fast to execute; its limitation is that the analyst should know the number of classes a priori. The Support Vector Machine (SVM) [19] is a non-parametric statistical classification method. This method can be used to extract a land-use map. SVM works on the assumption that there is no information on how the overall data are distributed. SVM reduces satellite classification cost, increases speed, and improves accuracy. The minimum distance [20] approach calculates the mean spectrum of each predefined class and assigns each pixel to the group whose mean is at the least distance. It is easy to execute and simple to process, but the minimum distance method considers only the mean value. The Mahalanobis distance method [21] is very similar to the minimum distance method; it additionally uses the covariance matrix for satellite image classification. The parallelepiped method [20] operates with parallelepiped-shaped boxes for each class. The parallelepiped boundaries for each class are pre-determined; pixels of test images are checked against these boundaries to determine the class of each pixel. The parallelepiped method is fast and easy to run, but overlaps between boxes may produce false results.
3. IMAGE CLASSIFICATION TECHNIQUES

Classification between objects is an easy task for humans but has proved to be a complex problem for machines. The rise of high-capacity computers, the availability of high-quality and low-priced video cameras, and the increasing need for automatic video analysis have generated interest in object classification algorithms. A simple classification system consists of a camera fixed high above the zone of interest, where images are captured and subsequently processed. Classification includes image sensors, image preprocessing, object detection, object segmentation, feature extraction, and object classification. A classification system includes a database that contains predefined patterns that are compared with each detected object in order to classify it into the proper category. Image classification is an important and challenging task in various application domains, including biomedical imaging, biometry, video surveillance, vehicle navigation, industrial visual inspection, robot navigation, and remote sensing. The classification process consists of the following steps:

A. Pre-processing: atmospheric correction, noise removal, image transformation, principal component analysis, etc.

B. Detection and extraction of an object: detection includes locating the position and other characteristics of a moving object in the image obtained from the camera; in extraction, the trajectory of the detected object is estimated in the image plane.

C. Training: selection of the particular attributes which best describe the pattern.

D. Classification of the object: the object classification step categorizes detected objects into predefined classes by using a suitable method that compares the image patterns with the target patterns.

IMAGE CLASSIFICATION APPROACHES

Various image classification approaches are defined briefly:

1) On The Basis Of The Characteristics Used

These methods make use of the objects' 2D spatial information. Common features used in shape-based classification schemes are points (centroid, set of points), primitive geometric shapes (rectangle or ellipse), skeleton, silhouette, and contour.
2) On The Basis Of Training Samples Used:

Supervised Classification

This is the process of using samples of known informational classes (training sets) to classify pixels of unknown identity. Examples: minimum distance to means algorithm, parallelepiped algorithm, maximum likelihood algorithm. A hedged sketch of the minimum distance to means classifier is given below.
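The following sketch assumes MATLAB; the function name minDistClassify and the N-by-B pixel matrix layout are illustrative choices, not part of any cited method.

function labels = minDistClassify(pixels, classMeans)
% Minimum-distance-to-means classification (hedged sketch; save as
% minDistClassify.m).
% pixels:     N-by-B matrix (N pixels, B spectral bands)
% classMeans: K-by-B matrix of per-class training-set means
    K = size(classMeans, 1);
    N = size(pixels, 1);
    d = zeros(N, K);
    for k = 1:K
        diff = bsxfun(@minus, pixels, classMeans(k, :)); % deviation from class k mean
        d(:, k) = sqrt(sum(diff .^ 2, 2));               % Euclidean distance
    end
    [~, labels] = min(d, [], 2);  % assign each pixel to the nearest class mean
end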

Unsupervised Classification

This type of classification examines a large number of unknown pixels and divides them into a number of classes based on natural groupings present in the image values. The computer determines the spectrally separable classes and then defines their information value. No extensive prior knowledge is required. Example: the K-means clustering algorithm, sketched below.
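A hedged sketch of such unsupervised grouping with k-means (from the Statistics and Machine Learning Toolbox; the image name and k = 3 are assumptions):

% Group pixel gray values into k spectral clusters with no training data.
f = double(imread('cameraman.tif'));
k = 3;                                  % assumed number of spectral classes
idx = kmeans(f(:), k, 'Replicates', 3); % cluster the gray values
clustered = reshape(idx, size(f));      % map cluster labels back to image form
figure, imagesc(clustered), axis image, colormap(jet(k)), colorbar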

3) On The Basis Of Assumptions Of Parameters On Data:

Parametric classifiers use parameters such as the mean vector and covariance matrix, which are frequently generated from training samples, and assume a Gaussian distribution of the data. Non-parametric classifiers make no assumption about the data and do not use statistical parameters to calculate class separation. Examples: artificial neural network, support vector machine, decision tree classifier, expert system.

4) On The Basis Of Pixel Information Used:

Per-pixel classifiers: a conventional classifier generates a signature by using the combination of the spectra of all training-set pixels for a given feature, so the contributions of all materials present in the training-set pixels are present in the resulting signature. It can be parametric or non-parametric; the accuracy may suffer because of the impact of the mixed-pixel problem. Examples: maximum likelihood, ANN, support vector machine, and minimum distance.

Subpixel classifiers:

The spectral value of each pixel is assumed to be a linear or non-linear combination of defined pure materials called end members, providing the proportional membership of each pixel to each end member. Subpixel classifiers have the capability to handle the mixed-pixel problem and are suitable for medium and coarse spatial resolution images. Examples: spectral mixture analysis, subpixel classifier, fuzzy-set classifiers.

Per-field classifier:

The per-field classifier is intended to handle the problem of environmental heterogeneity and can also improve the classification accuracy. It is generally used by GIS-based classification approaches.

Object-oriented classifiers:

Pixels of the image are merged into objects, and classification is then performed on the basis of those objects. This involves two stages: image segmentation, which merges pixels into objects, and classification, which is then implemented on the basis of the objects. Example: eCognition.

5) On The Basis Of Number Of Outputs For Each Pixel:

Hard (crisp) classification: each pixel is required or forced to show membership to a single class, e.g., maximum likelihood, minimum distance, artificial neural network, decision tree, and support vector machine.


6) On The Basis Of Spatial Information:

Spectral classifiers use pure spectral information. Examples: maximum likelihood, minimum distance, artificial neural network. Contextual classifiers use the spatially neighbouring pixel information. Example: frequency-based contextual classifier. Spectral-contextual classifiers use both spectral and spatial information: initial classification images are generated using parametric or non-parametric classifiers, and contextual classifiers are then applied to the classified images. Example: a combination of parametric or non-parametric and contextual algorithms.


4. PROPOSED ALGORITHM

The basic principle of the proposed algorithms is to integrate the K-Means algorithm with a LoG filter and a Prewitt filter, as follows.

Algorithm 1
Step 1: Read the RGB image available for classification.
Step 2: Convert the image from the RGB color space to the L*a*b* color space.
Step 3: Classify the colors in 'a*b*' space using the K-means clustering algorithm.
Step 4: Label every pixel of the image using the results of the K-means algorithm.
Step 5: Create images that segment the original image by color.
Step 6: Segment the nuclei of the image into a separate image.
Step 7: The Laplacian of Gaussian filter finds edges by looking for zero crossings.

Algorithm 2
Step 1: Perform Steps 1 to 6 of Algorithm 1.
Step 2: The Prewitt filter finds edges at points where the gradient is maximal.

The proposed algorithms consist of two phases. In the first phase, we construct the clusters using the K-Means algorithm. If k and d are fixed, the problem can be solved exactly in time O(n^(dk+1) log n), where n is the number of entities to be clustered. In the second phase we obtain the classified image by filtering the clusters through a Laplacian of Gaussian filter or through a Prewitt filter. A minimal MATLAB sketch of Algorithm 1 follows.
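This is a minimal sketch, not the authors' exact implementation: the input path and k = 3 are assumptions, and it requires the Image Processing Toolbox (applycform, edge) and the Statistics and Machine Learning Toolbox (kmeans). For Algorithm 2, replace 'log' with 'prewitt' in the final edge call.

rgb = imread('d:\13.jpg');                    % Step 1: read RGB image (assumed path)
lab = applycform(rgb, makecform('srgb2lab')); % Step 2: RGB -> L*a*b*
ab  = double(lab(:,:,2:3));
ab  = reshape(ab, [], 2);                     % one row per pixel: [a*, b*]
k   = 3;                                      % assumed number of clusters
idx = kmeans(ab, k, 'Replicates', 3);         % Step 3: cluster colors in a*b* space
labels = reshape(idx, size(rgb,1), size(rgb,2)); % Step 4: label every pixel
figure, imagesc(labels), axis image, title('Clusters'); % Step 5: view segments
mask  = labels == 1;                          % Step 6: isolate one cluster
edges = edge(mask, 'log');                    % Step 7: LoG zero-crossing edges
figure, imshow(edges), title('LoG edges of cluster 1');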

Laplacian of Gaussian:

A 2-D isotropic measure of the second spatial derivative of an image is called the Laplacian. The Laplacian of an image highlights regions of rapid intensity change, and because of this property the operator is popularly used for edge detection, for example in zero-crossing edge detectors. The image needs to be pre-smoothed in order to reduce the operator's sensitivity to noise, since Laplacian kernels approximate a second-derivative measurement on the image. To overcome this, the image is smoothed with a Gaussian filter before the Laplacian filter is applied; the smoothing removes noise components as much as possible, mainly eliminating high-frequency noise. Because convolution is associative, the Gaussian smoothing filter can be convolved with the Laplacian filter first, and this hybrid (LoG) filter then convolved with the given image. This approach requires far fewer arithmetic operations, and only one convolution needs to be performed at run-time on the image. A short sketch using the toolbox LoG kernel follows.
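A hedged sketch, assuming the Image Processing Toolbox: fspecial('log') builds the pre-convolved Gaussian-plus-Laplacian kernel directly, so only one convolution touches the image, while edge(...,'log') additionally finds the zero crossings. Kernel size 9 and sigma 1.5 are arbitrary assumptions.

f  = imread('cameraman.tif');
h  = fspecial('log', 9, 1.5);             % 9x9 LoG kernel (Gaussian convolved with Laplacian)
g  = imfilter(double(f), h, 'replicate'); % one run-time convolution with the image
bw = edge(f, 'log', [], 1.5);             % zero-crossing edge map for comparison
figure, imshowpair(mat2gray(g), bw, 'montage');
title('LoG response (left) and zero-crossing edges (right)');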

Prewitt Filter

The Prewitt filter is another well-known edge detection filter; it uses two 3×3 kernels. One kernel responds to changes in the horizontal direction and the other to changes in the vertical direction. The sketch below shows the two kernels and the calculation of the horizontal component Gx.
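A hedged sketch of the two Prewitt kernels and the gradient computation (cameraman.tif is an assumed sample image):

Gx_kernel = [-1 0 1; -1 0 1; -1 0 1];   % responds to horizontal changes
Gy_kernel = [-1 -1 -1; 0 0 0; 1 1 1];   % responds to vertical changes
f  = double(imread('cameraman.tif'));
gx = conv2(f, Gx_kernel, 'same');       % horizontal gradient component Gx
gy = conv2(f, Gy_kernel, 'same');       % vertical gradient component Gy
gm = sqrt(gx.^2 + gy.^2);               % gradient magnitude
figure, imshow(mat2gray(gm)), title('Prewitt gradient magnitude');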

Figure: Input image.

Figure: Output image (Prewitt filter).

Data and Methodology

In the present study we have used mainly two types of data: a topographic map and remote sensing data. The geo-referenced and merged remote sensing data of LISS III and PAN of IRS 1D of 2003, in digital mode, were obtained from the National Remote Sensing Agency (NRSA), Government of India, Hyderabad. The spatial resolutions of LISS III and PAN are 23.5 and 5.8 meters, and their spectral resolutions are 4 bands and 1 band, respectively. The topographic map 57 O/6 (1:50,000 scale) was obtained from the Survey of India, Hyderabad; it was converted to digital mode by scanning. The topographic map was geo-referenced with longitudes and latitudes using ArcGIS and MATLAB software and spatial analyst tools, and the boundary of the study area was demarcated. A supervised signature extraction with the maximum likelihood algorithm was employed to classify the geo-referenced and merged LISS III and PAN digital data of IRS 1D for land use/land cover mapping for the year 2003. Before the preprocessing and classification of the satellite imagery began, an extensive field survey was performed throughout the study area using Global Positioning System (GPS) equipment. This survey was performed in order to obtain accurate location point data for each land use and land cover class included in the classification scheme, as well as for the creation of training sites and for signature generation.

The satellite data were enhanced before classification using histogram equalization in ERDAS Imagine to improve the image quality and to achieve better classification accuracy. In supervised classification, spectral signatures are developed from specified locations in the image. These specified locations are given the generic name training sites and are defined by the user. Generally a vector layer is digitized over the raster scene; the vector layer consists of various polygons overlaying different land use types. The training sites help develop spectral signatures for the outlined areas. The land use maps pertaining to two different periods were used for post-classification comparison, which facilitated the estimation of changes in the land use categories and of the dynamism of the changes. Post-classification comparison is the most commonly used quantitative method of change detection, with fairly good results. Post-classification comparison is sometimes referred to as delta classification. It involves independently produced spectral classification results from different data sets, followed by a pixel-by-pixel or segment-by-segment comparison to detect changes in the classes; a hedged sketch of such a comparison follows. The detailed methodology adopted is shown in the flow chart (Figure 2).
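A hedged MATLAB sketch of the pixel-by-pixel comparison; the function name and the assumption that the two classified maps are equal-size label matrices with classes 1..nClasses are illustrative:

function changeMatrix = deltaClassification(lc1976, lc2003, nClasses)
% Post-classification (delta) comparison: tally from-to class changes
% between two independently produced classification maps (save as
% deltaClassification.m).
% lc1976, lc2003: equal-size label matrices (assumed classes 1..nClasses)
    changeMatrix = zeros(nClasses);   % rows: 1976 class, columns: 2003 class
    for c1 = 1:nClasses
        for c2 = 1:nClasses
            changeMatrix(c1, c2) = sum(lc1976(:) == c1 & lc2003(:) == c2);
        end
    end
end

Pixels counted off the diagonal of changeMatrix changed class between the two dates.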


Results and Discussion
The K-Means algorithm was implemented in MATLAB 12. The experiments were performed on an Intel Core 2 Duo machine with a T9400 chipset, a 2.53 GHz CPU, and 4 GB of RAM running Microsoft Windows 7. To investigate, we first ran the K-Means algorithm, and the input image with the region of interest was divided into various clusters.

Figures: original image; objects in the first, second, and third clusters; output of the Prewitt filter.

Knowledge about land use/land cover has become important for addressing problems such as the disturbance of biogeochemical cycles and the loss of biodiversity.

Figure 2: Flow chart of the methodology for land use/land cover mapping and change detection (loading and preprocessing; geo-referencing and mosaicking of the SOI toposheet of 1976 into a final rectified toposheet; geo-referencing of the IRS imagery of 2003; image processing, supervised classification, and field checks; land use/land cover maps of 1976 and 2003; land use/land cover change detection).

The main reasons behind the LU/LC changes include rapid population growth, rural-to-urban migration, reclassification of rural areas as urban areas, lack of valuation of ecological services, poverty, ignorance of biophysical limitations, and use of ecologically incompatible technologies. The present study area, Vellore, is a rapidly developing town. During the past few decades, the study area has witnessed a substantial increase in population, economic growth, industrialization, and transportation activities, which have had a negative impact on the environmental health of the region. Because multiple data sets were involved, we used technologies such as remote sensing and GIS to quantify the LU/LC changes. On the basis of the interpretation of remote sensing imagery, field surveys, and existing study area conditions, we classified the study area into eight categories: agriculture, built-up area, dense forest, mining, open forest, other land, plantation, and water spread area. The study area covers 125 km2, and LU/LC changes were estimated from 1976 to 2003.
Figure: Land use/land cover.
The LU/LC changes were greatest in agriculture, built-up area, plantation, other land, and dense forest from 1976 to 2003. The comparison of LU/LC in 1976 and 2003 derived from toposheet and satellite imagery interpretation indicates that the built-up area, comprising human habitation developed for nonagricultural uses such as buildings, transport, and communications, broadened markedly from 5.91 km2 (1976) to 18.34 km2 (2003), a net addition of 12.44 km2. This is due to urban expansion and population increase in the study area during the study period. The agricultural lands, which are used for paddy and the production of food, vegetables, and other mixed varieties such as mango, coconut, and other homestead trees, decreased markedly from 68.23 km2 (1976) to 21.45 km2 (2003), a net decrease of 46.78 km2. The study area witnessed a large amount of agricultural land converted into settlements and other urban development activities.

Water spread area, comprising both man-made and natural water features such as rivers/streams, tanks, and reservoirs, also decreased, from 12.09 km2 in 1976 to 9.91 km2 in 2003, a net decline of 2.18 km2. The decrease in water spread area occurred due to the gradual conversion of water spread area into built-up or other human developmental area as the population increased significantly during the past decades. Dense forest, comprising all land with tree cover of canopy density of 70% and above, declined significantly from 1976 (22.35 km2) to 2003 (4.25 km2), a net decrease of 18.10 km2. This is attributed to the conversion of forest lands into urban areas and other development activities.
Open forest land, comprising all lands with tree cover of canopy density between 10% and 40%, was not found in 1976, whereas there was a significant addition of 10.90 km2 of such land in 2003, due to the implementation of afforestation works by Vellore municipality during 2001-2003 under the Haritha project.

The plantation land, which includes agricultural tree crops and other horticulture nurseries, also increased, from 0.79 km2 (1976) to 21.80 km2 (2003), a net increase of 21.01 km2. The other land, consisting of roads (mostly link roads joining the village settlements) and barren land with or without scrub and sandy areas, broadened markedly from 15.64 km2 (1976) to 38.22 km2 (2003), a net increase of 22.54 km2. In 1976 no mining activities were found in the study area, but a small addition of 0.13 km2 of mining land was found in 2003.


Figure: Land use/land cover changes.
CONCLUSION
We integrated the K-Means algorithm with the Laplacian of Gaussian filter and compared that with the combination of the K-Means algorithm and the Prewitt filter. The integrated novel clustering algorithms for image classification were tested with different images, including satellite images. We found that the latter performs well compared to the former. These algorithms are robust and very effective in producing the desired classifications, especially in the field of pattern recognition for a region of interest, as demonstrated by the experimental results. Our results also clearly show that LU/LC changes were significant during the period from 1976 to 2003. A significant expansion of built-up area is noticed; on the other hand, there is a decrease in agricultural area, water spread area, and forest areas. This study clearly indicates the significant impact of population growth and its development activities on LU/LC change, and it shows that the integration of GIS and remote sensing technologies is an effective tool for urban planning and management.
REFERENCES
1. Maktav D, Erbek FS, Jürgens C (2005) Remote sensing of urban areas.
International Journal of Remote Sensing 26: 655-659.
2. Herold M, Scepan J, Clarke K (2002) The use of remote sensing and landscape
metrics to describe structures and changes in urban land uses. Environment and
Planning 34: 1443-1458.
3. Alphan H, Doygun H, Unlukaplan YI (2009) Post-classification comparison of
land cover using multitemporal Landsat and ASTER imagery: the case of
Kahramanmaraş, Turkey. Environ Monit Assess 151:
327-336.
4. Treitz P, Rogan J (2004) Remote sensing for mapping and monitoring land
cover and land-use change. Progress in Planning 61: 269-279.
5. Treitz P, Rogan J (2004) Remote sensing for mapping and monitoring land
cover and land-use change-an introduction. Progress in Planning 61:269-279.
6. Tahir M, Imam E, Hussain T (2013) Evaluation of land use/land cover changes
in Mekelle City, Ethiopia using Remote Sensing and GIS. Computational Ecology and
Software 3: 9-16.
7. Hadeel AS, Jabbar MT, Chen X (2009) Application of remote sensing and GIS
to the study of land use/cover change and urbanization expansion in Basrah province,
southern Iraq. Geospatial Information Science 12:
135-141.
8. Liu M, Hu Y, Zhang W, Zhu J, Chen H, et al. (2011) Application of landuse
change model in guiding regional planning: A case study in Hun-Taizi River
Watershed, Northeast China. Chin Geogr Sci 21: 609.
9. Singh A (1989) Review Article Digital change detection techniques using
remotely-sensed data. International Journal of Remote Sensing 10: 989-1003.
10. Treitz PM, Howarth PJ, Gong P (1992) Application of satellite and GIS
technologies for land-cover and land-use mapping at the rural-urban fringe: a case
study. Photogrammetric Engineering and Remote Sensing
58: 439-448.
11. Balakeristanan ML, Md Said MA (2012) Land Use Land Cover Change
Detection Using Remote Sensing Application for Land Sustainability. American
Institute of Physics 1482: 425-430.
12. Lillesand TM, Kiefer RW, Chipman JW (2004) Remote Sensing and Image
Interpretation. John Wiley & Sons, New Jersey, USA.
13. Chavez PS (1996) Image-Based Atmospheric Corrections-Revisited and
Improved. Photogrammetric Engineering & Remote Sensing 62:1025-1036.
14. Jin-Song D, Ke W, Jun L, Yan-Hua D (2009) Urban Land Use Change
Detection Using Multisensor Satellite Images. Pedosphere 19: 96-103.
15. Tucker M, Asik O (2002) Detecting Land Use Changes at the Urban Fringe
from Remotely Sensed Images in Ankara, Turkey. Geocarto International 17: 47-52.
16. Muttitanon W, Tripathi NK (2005) Land use/land cover changes in the
coastal zone of Ban Don Bay, Thailand using Landsat 5 TM data. International Journal
of Remote Sensing 26: 2311-2323.
17. Mas JF (1999) Monitoring land-cover changes A comparison of change
detection techniques. International Journal of Remote Sensing 20:139-152.
18. Shalaby A, Tateishi R (2007) Remote sensing and GIS for mapping and
monitoring land cover and land-use changes in the Northwestern coastal zone of Egypt.
Applied Geography 27: 28-41.
19. Lu D, Weng Q (2007) A survey of image classification methods and
techniques for improving classification performance. International Journal of Remote
Sensing, 28: 823-870.
20. Lu D, Amerasinghe P, Brondzio E, Moran E (2004) Change detection
techniques. International Journal of Remote Sensing, 25: 2365-2401.
21. Gao J (2009) Digital Analysis of Remotely Sensed Imagery. McGraw-Hill
Companies, Inc, New York, USA.
22. Anderson JR, Hardy EE, Roach JT (1976) A Land Use and Land Cover Classification System for Use with Remote Sensor Data. United States Government Printing Office, Washington, USA.
23. Liu JG, Mason PJ (2013) Essential Image Processing and GIS for Remote
Sensing. John Wiley & Sons, New Jersey, USA.
24. Binyam B, Garedew E, Eshetu Z, Kassa H (2015) Land Use and Land Cover
Changes and Associated Driving Forces in North Western Lowlands of Ethiopia.
International Research Journal of Agricultural
Science and Soil Science 5: 28-44.
25. Shalaby A, Tateishi R (2007) Remote sensing and GIS for mapping and
monitoring land cover and land-use changes in the Northwestern coastal zone of Egypt.
Applied Geography 27: 28-41.
26. Mundia CN, Aniya M (2005) Analysis of land use/cover changes and urban
expansion of Nairobi city using remote sensing and GIS. International Journal of
Remote Sensing 26: 2831-2849.
27. Dewan AM, Yamaguchi Y (2009) Using remote sensing and GIS to detect and
monitor land use and land cover change in Dhaka Metropolitan of Bangladesh during
1960-2005. Environmental Monitoring and
Assessment 150: 237.
28. Yuan F, Sawaya KE, Loeffelholz B, Bauer ME (2005) Land cover classification
and change analysis of the Twin Cities (Minnesota) Metropolitan Area by
multitemporal Landsat remote sensing. Remote
Sensing of Environment 98: 317-328.
29. Reis S (2008) Analyzing Land Use/Land Cover Changes Using Remote
Sensing and GIS in Rize, North-East Turkey. Sensors 8: 6188-6202.
30. Paul OV (2007) Remote Sensing: New Applications for Urban Areas. The
point of View 95: 2267-2268.
31. Hu X, Zhou W, Qian Y, Yu W (2017) Urban expansion and local landcover
change both significantly contribute to urban warming, but their relative importance
changes over time. Landscape Ecol 32: 763-780.
APPENDIX
Source Code
close all
clear all
%read the source rgb image
x=imread('d:\13.jpg');
figure;imshow(x);
[r, c, s]=size(x);
%initialize storage for each sample regions
classes={'red','black','blue','yellow','green','background'};
nclasses=length(classes);
sample_regions=false([r c nclasses]);
%select each sample region
f=figure;
for count=1:nclasses
set(f,'name',['select sample region for ' classes{count}]);
sample_regions(:,:,count)=roipoly(x);
end
close(f);
%display sample regions
for count=1:nclasses
figure
imshow(sample_regions(:,:,count))

title(['sample region for ' classes{count}]);


end
%convert the RGB image into an L*a*b* image
cform=makecform('srgb2lab');
lab_x=applycform(x,cform);
%calculate the mean a and b value for each ROI area
a=lab_x(:,:,2);
b=lab_x(:,:,3);
color_markers=zeros([nclasses,2]);
for count=1:nclasses
color_markers(count,1)=mean2(a(sample_regions(:,:,count)));
color_markers(count,2)=mean2(b(sample_regions(:,:,count)));
end
%% step 3: classify each pixel using the nearest neighbour rule
% each color marker now has an 'a*' and 'b*' value. you can classify each
% pixel in the |lab_x| image by calculating the euclidean distance between
% that pixel and each color marker. the smallest distance will tell you
% that the pixel most closely matches that color marker.
% for example, if the distance between a pixel and the red color marker is
% the smallest, then the pixel is labeled as a red pixel
color_labels = 0:nclasses-1;
a=double(a);
b=double(b);
distance = zeros([size(a), nclasses]);
%perform classification
for count = 1:nclasses
distance(:,:,count)=((a-color_markers(count,1)).^2+...
(b-color_markers(count,2)).^2).^0.5;
end
[value,label]=min(distance,[],3);
label=color_labels(label);
%clear value distance
colors=[255 0 0; 0 255 0;0 0 255;255 255 0;255 0 255;0 255 255];
y=zeros(size(x));
i=double(label)+1;
for m=1:r
for n=1:c
y(m,n,:)=colors(i(m,n),:);
end
end
figure;imshow(y)
colorbar
%scatter plot for the nearest neighbor classification
purple=[119/255 73/255 152/255];
plot_labels = {'k','r','g',purple,'m','y'};
figure
for count=1:nclasses
plot(a(label==count-1), b(label==count-1), '.', 'markeredgecolor', ...
plot_labels{count}, 'markerfacecolor', plot_labels{count});
hold on;
end
title('Scatter Plot');
xlabel('"a*"value');
ylabel('"b*"value');
