Image classification plays a significant role in medical diagnosis and mining analysis, and in recent years it has even been applied to cancer diagnosis. Cluster analysis is a valuable and useful tool for image classification and object diagnosis. A variety of clustering algorithms are available, and clustering remains a topic of interest in the image processing field. These clustering algorithms face several problems, some of which can be tackled by integrating the K-Means algorithm. A comparative study is made between the two novel combination algorithms. The experimental results demonstrate that the proposed algorithms are very effective in producing the desired clusters of the given data sets as well as in diagnosis, and they are useful both for image classification and for the extraction of objects. Land use/land cover (LU/LC) changes were also studied using Geographical Information Systems (GIS) and remote sensing technology. These studies employed the Survey of India topographic map 57O/6 and the remote sensing data of LISS III and PAN of IRS 1D of 2003. The study area was classified into
eight categories on the basis of field study, geographical conditions, and remote sensing
data. The comparison of LU/LC in 1976 and 2003 derived from toposheet and satellite
imagery interpretation indicates that there is a significant increase in built-up area, open
forest, plantation, and other lands. It is also noted that a substantial amount of agricultural land, water spread area, and dense forest area vanished during the study period, which may be due to rapid urbanization of the study area. No mining activities were found in the study area in 1976, but a small addition of mining land was found in 2003.
INTRODUCTION
Today, the pull of cities, clusters, and mega-cities is still growing. More and more people move to the urban centers of their country to participate in urban life, hoping to earn more money than in the countryside. In developing countries urbanization is rapid for three reasons: population growth caused by natural increase, migration toward urban areas, and the reclassification of rural areas as urban centers [1]. Population growth also affects the sustainability of cities; the more people on the earth, the greater the impact on the environment and the pressure on resources [1]. Population growth, regional in-migration, and increasing ecological problems require advanced methods for city planners to support viable development in these quickly changing districts [2]. A better understanding of urban growth is also needed to relate it to ecosystem dynamics from local to regional levels [3]. Fast urbanization, especially urban land expansion, and related problems like poverty, unemployment, and poor sanitary conditions are particularly acute in developing countries. The rapid development of worldwide urbanization, its related problems, and the information planners need (the extent and spatial distribution of various land use types) cannot be analyzed without a systematic link between new technologies and in-situ observations [1]. Because of its merits, especially wide area coverage, abundant data, and multispectral resolution, remote sensing technology has been commonly applied to monitor and analyze land use dynamics and urban expansion. Remote sensing and GIS are very important tools for obtaining precise and timely information regarding the spatial distribution of land use/land cover changes.
LANDSCAPE
Since the launch of the first Earth Resources Technology Satellite in 1972, there has been sustained interest in mapping the urban landscape, a comparatively new issue for geographers. Before the introduction of space-borne platforms, airborne platforms were the principal source of data for remote sensing of urban areas. A major advance is the effect of methodological enhancements that now permit satellite remote sensing systems to obtain imagery of high spatial resolution [1]. Aerial photography has a very long archived data record, while satellite remote sensing for Earth observation started in 1972 with the first Landsat satellite. The most recent (since 1999) high-resolution systems include IKONOS (1 m) and QuickBird (0.61 m). Satellite data are nowadays adequate to meet the mapping and monitoring needs that are an important source of information for municipal planning. In particular, as the spatial resolution of remote sensing satellites advances, such data aid the examination of the less planned urban cores of older cities. Land use/land cover is a phenomenon that changes through time and space due to human-made pressure and development. Appraising the present land use and its periodic change is useful for urban planners, policy makers, and natural resource managers, and remote sensing offers an important means of detecting and analyzing temporal changes [6]. The understanding of the growth dynamics of urban clusters and of land use changes underscores the obvious need for continuous monitoring of the phenomenon of growth and for mapping and analyzing LULC changes [4,5]. Defining the effects of land-use and land-cover change involves many drivers, including technology and other factors [4,5,7,8]. Concerning urban structure and composition, it can be seen that many objects are made up of small amounts of different materials whose spatial arrangement produces heterogeneous pixels in the imagery. Analyzing such scenes requires adequate geometric resolution (to separate objects spatially), spectral and radiometric resolution (to distinguish objects thematically), and temporal resolution (to get consistent images of the same material on separate dates) [1,9,10]. To analyze dynamic urban landscapes, tasks are carried out at different levels. Tasks at the lowest level of information need concern single blocks of buildings and require the largest scales (1:1,000-1:5,000), since individual houses, roads, etc., have to be detected in detail. The medium level would focus on a whole city and requires medium scales (1:10,000-1:25,000). The highest level focuses on regions, agglomerations, and their surrounding areas and does not need a detailed differentiation inside the city, consequently needing only small scales (1:50,000-1:100,000) [1].
Satellite imagery has commonly been applied to examine dynamic urban land use change. Urban land use monitoring encompasses the use of multi-temporal images to detect the variation in land use due to environmental conditions and human activities between the acquisition dates of the images [9]. Land use and land cover is a vital constituent in understanding the interactions of human activities with the environment. Land use can be defined as the human activities on the land: humans use land for different activities such as agriculture, urban development, logging, grazing, and mining, among many others [11]. Land use is associated with human activity on an explicit portion of land [12], while land cover is defined as the kind and state of the surface, such as forest, cropland, grass cover, wetland, pastures, roads, and urban area [11]. Land use entails diverse land covers found on the earth's surface together with abstract notions formed from a mixture of socio-cultural aspects; these have only slight physical expression in reflectance properties and therefore a limited relation to remote sensing. Remote sensing data record the spectral properties of surface materials and hence are more closely related to land cover; land use cannot be measured directly, but must be inferred, for example through image processing and spatial pattern analyses that derive land use from land-cover information.
The study area, the Vellore region, is located near the metropolitan city of Chennai.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image [27]. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the most widely used term.
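The definition above can be made concrete with a small array. The 4x4 image and its gray levels below are purely illustrative:

```python
import numpy as np

# A digital image is a 2-D function f(x, y) sampled into a finite grid.
# Hypothetical 4x4 8-bit grayscale image (values are gray levels 0-255).
f = np.array([
    [ 12,  40,  80, 120],
    [ 40,  90, 140, 180],
    [ 80, 140, 200, 230],
    [120, 180, 230, 255],
], dtype=np.uint8)

# The amplitude of f at spatial coordinates (x, y) is the intensity (gray level).
x, y = 2, 1
print(f[x, y])   # gray level at (2, 1) -> 140
print(f.size)    # number of picture elements (pixels) -> 16
```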
Unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images, such as ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing (DIP) encompasses a wide and varied field of applications.
Interpretation and analysis of remote sensing data involve the identification and measurement of various targets in an image in order to extract useful information about them. Two main methods can be used to interpret and extract information from images.
Visual interpretation of images is based on feature tone (color), pattern, shape, texture, shadow, and association; the identification of targets is performed by a human.
Digital processing and analysis may be performed using a computer (without manual intervention by a human interpreter). This method can be used to enhance data, to correct or restore the image, to automatically identify targets and extract information, and to delineate different areas in an image into thematic classes.
FUNDAMENTAL STEPS OF DIGITAL IMAGE PROCESSING
There are some fundamental steps, and since they are fundamental, each of these steps may have sub-steps. The fundamental steps are described below with a diagram.
Figure: Fundamental steps of digital image processing: image acquisition, image filtering and enhancement, segmentation, representation and description, and object recognition, all supported by a knowledge base and driven by the problem domain.
Image Acquisition
This is the first step or process of the fundamental steps of digital image processing.
After the image has been obtained, various methods of processing can be applied to the
image to perform the many different vision tasks required today. However, if the image
has not been acquired satisfactorily, then the intended tasks may not be achievable,
even with the aid of some form of image enhancement.
Image preprocessing
Image preprocessing can significantly increase the reliability of an optical inspection.
Several filter operations, which intensify or reduce certain image details, enable an
easier or faster evaluation. Users are able to optimize a camera image with just a few
clicks.
Image Enhancement
Image enhancement is the modification of an image by changing the pixel brightness
values to improve its visual impact. It involves a collection of techniques that are used
to improve the visual appearance of an image or to convert the image to a form which is
better suited for human or machine interpretation.
Sometimes images obtained from satellites and from conventional and digital cameras lack contrast and brightness because of the limitations of imaging subsystems and illumination conditions while capturing the image. Images may also contain different types of noise. In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display [28]. Examples include contrast and edge enhancement, pseudo-coloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and image display. The enhancement process itself does not increase the inherent information content of the data; it simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application dependent. Some common enhancement techniques are contrast stretching, noise filtering, and histogram modification.
Contrast Stretching
Some images (e.g., over water bodies, deserts, dense forests, snow, clouds, and hazy conditions over heterogeneous regions) are homogeneous, i.e., they do not have much variation in their gray levels. In terms of the histogram, they are characterized by very narrow peaks. The homogeneity can also be due to incorrect illumination of the scene [27]. The images thus obtained are not easily interpretable due to poor human perceptibility, because only a narrow range of gray levels is occupied out of the wider range available. Contrast stretching methods are designed for such frequently encountered situations: different stretching techniques have been developed to stretch the narrow range to the whole of the available dynamic range.
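As a sketch of the idea, a simple linear stretch (one of several possible stretching techniques; the input values below are hypothetical) can be written as:

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linearly stretch a narrow range of gray levels to the full dynamic range."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:  # perfectly homogeneous image: nothing to stretch
        return np.full_like(img, out_min)
    stretched = (img.astype(float) - lo) * (out_max - out_min) / (hi - lo) + out_min
    return stretched.round().astype(np.uint8)

# A homogeneous image whose histogram is a narrow peak between 100 and 120
narrow = np.array([[100, 105, 110], [110, 115, 120]], dtype=np.uint8)
print(contrast_stretch(narrow))  # values now span the full 0-255 range
```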
Noise Filtering
Noise filtering is used to remove unnecessary information from an image and to suppress various types of noise. Mostly this feature is interactive. Various filters such as low pass, high pass, mean, and median are available [27].
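For illustration, a median filter (one of the filters listed above) might be implemented as follows; the noisy test image is made up:

```python
import numpy as np

def median_filter(img, k=3):
    """Remove impulse (salt-and-pepper) noise by replacing each pixel
    with the median of its k x k neighbourhood (edges replicated)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat region of gray level 50 corrupted by one salt (255) noise pixel
noisy = np.full((5, 5), 50, dtype=np.uint8)
noisy[2, 2] = 255
clean = median_filter(noisy)
print(clean[2, 2])   # the impulse is removed -> 50
```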
Histogram Modification
The histogram is of great importance in image enhancement; it reflects the characteristics of the image, so by modifying the histogram, the image characteristics can be modified. One such example is histogram equalization: a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and lessened at the tails [27].
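The classic equalization mapping described above can be sketched as follows; the 3x3 low-contrast image is hypothetical:

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Nonlinear stretch that redistributes gray levels so the cumulative
    histogram becomes approximately linear (a roughly flat histogram)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic equalization mapping: scale the CDF to the full gray-level range
    mapping = np.round((cdf - cdf_min) / (img.size - cdf_min) * (levels - 1))
    return mapping.astype(np.uint8)[img]

low_contrast = np.array([[52, 55, 61], [59, 79, 61], [85, 52, 55]], dtype=np.uint8)
print(histogram_equalize(low_contrast))  # gray levels spread over 0..255
```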
Image Restoration
Image restoration is the operation of taking a corrupted/noisy image and estimating
the clean original image. Corruption may come in many forms such as motion blur,
noise and camera misfocus. This may be considered as reversing the damage done to an
image by a known cause (Removing of blur caused by linear motion, removal of optical
distortions). It is concerned with filtering the observed image to minimize the effect of
degradations. Effectiveness of image restoration depends on the extent and accuracy of
the knowledge of degradation process as well as on filter design.
Compression
Compression reduces the irrelevance and redundancy of the image data in order to store or transmit the data in an efficient form. It is concerned with minimizing the number of bits required to represent an image. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods are especially suitable at low bit rates.
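As a toy illustration of lossless redundancy reduction, run-length encoding stores each run of identical gray levels as a (value, count) pair; the sample row is made up:

```python
def rle_encode(pixels):
    """Lossless run-length encoding: collapse each run of identical
    gray levels into a [value, count] pair (exploits spatial redundancy)."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode: expand each [value, count] pair."""
    return [v for v, n in runs for _ in range(n)]

row = [0, 0, 0, 0, 255, 255, 0, 0]
encoded = rle_encode(row)
print(encoded)                       # [[0, 4], [255, 2], [0, 2]]
assert rle_decode(encoded) == row    # lossless: decoding restores the row
```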
Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In
general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward a
successful solution of imaging problems that require objects to be identified
individually.
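A deliberately minimal illustration of partitioning an image into constituent parts is global thresholding; real segmentation, including the clustering-based approach used later in this work, is far more involved, and the image below is synthetic:

```python
import numpy as np

def threshold_segment(img, t):
    """Simplest segmentation: partition the image into foreground (1) and
    background (0) by comparing each gray level against a threshold t."""
    return (img > t).astype(np.uint8)

# A bright square "object" on a dark background
img = np.zeros((6, 6), dtype=np.uint8)
img[2:4, 2:4] = 200
mask = threshold_segment(img, t=100)
print(mask.sum())   # number of foreground pixels -> 4
```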
Object Recognition
Image recognition assigns a label to an object based on the information provided by its descriptors. It is the process of identifying and detecting an object or a feature in a digital image or video, and it is used in many applications such as factory automation, toll booth monitoring, and security surveillance. Typical image recognition algorithms include license plate matching, scene change detection, face recognition, pattern and gradient matching, and optical character recognition. A specific example is classifying digits using HOG features and an SVM classifier.
Image Classification
Image classification is the labeling of a pixel or a group of pixels based on its grey value [29].
In supervised classification, the identity and location of some of the land cover types, such as urban, wetland, and forest, are known a priori through a combination of field work and toposheets. The analyst attempts to locate specific sites in the remotely sensed data that represent homogeneous examples of these land cover types. These areas are commonly referred to as training sites, because the spectral characteristics of these known areas are used to 'train' the classification algorithm for the eventual land cover mapping of the remainder of the image. Multivariate statistical parameters are calculated for each training site. Every pixel both within and outside these training sites is then evaluated and assigned to the class of which it has the highest likelihood of being a member [31].
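A heavily simplified sketch of this idea, using minimum distance to training-site means rather than a full maximum-likelihood model; the class names and mean spectra below are made up:

```python
import numpy as np

# Hypothetical mean spectra (3 bands) computed from training sites
# for three illustrative land-cover classes.
class_means = {
    "urban":  np.array([180.0, 160.0, 150.0]),
    "forest": np.array([ 40.0,  90.0,  35.0]),
    "water":  np.array([ 20.0,  30.0,  60.0]),
}

def classify_pixel(pixel):
    """Assign the pixel to the class whose training-site mean spectrum
    is nearest (minimum Euclidean distance)."""
    return min(class_means, key=lambda c: np.linalg.norm(pixel - class_means[c]))

print(classify_pixel(np.array([35.0, 85.0, 40.0])))   # -> forest
```

A maximum-likelihood classifier, as used in this study, would additionally use each training site's covariance matrix rather than the mean alone.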
Knowledge Base
Knowledge may be as simple as detailing regions of an image where the information of
interest is known to be located, thus limiting the search that has to be conducted in
seeking that information. The knowledge base also can be quite complex, such as an
interrelated list of all major possible defects in a materials inspection problem or an
image database containing high-resolution satellite images of a region in connection
with change-detection applications.
Problem Domain
Digital image processing allows the use of much more complex algorithms for image processing and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means [33]. In particular, digital image processing is the only practical technology for classification, feature extraction, pattern recognition, projection, and multi-scale signal analysis. Some techniques used in digital image processing include pixelization, linear filtering, principal component analysis, independent component analysis, hidden Markov models, anisotropic diffusion, partial differential equations, self-organizing maps, neural networks, and wavelets [34].
MATLAB TOOLS
Review of Literature
This section illustrates a few recent satellite image classification methods. J. Shabnam et al. [11] introduced a supervised satellite image classification method to classify very high resolution satellite images into specific classes using fuzzy logic. This method classifies satellite images into five major classes: shadow, vegetation, road, building, and bare land. It uses image segmentation and fuzzy techniques: first-level segmentation classifies shadow, vegetation, and road, while second-level segmentation identifies buildings. Further, it uses a contextual check to classify unclassified segments and regions. Fuzzy techniques are used to improve the classification accuracy at the borders of water, urban, and green land on satellite images. This method takes a training set for every class and computes threshold values using k-means and LDA [13] techniques.
The method extracts low-level features from satellite images and applies the k-means algorithm to group them into unlabeled clusters, after which meaningful labels are assigned to the clusters. [14] illustrates the power of ontology in ocean satellite image classification. The method extracts low-level features from ocean satellite images and represents them in the OWL file format. This OWL file is merged with domain ontologies and labeling rules. Labeling rules, training rules, binary decision tree rules, and expert rules are represented using the SWRL [15] language. The method produces classification results for a given ocean satellite image with the support of training, human expert, decision support, and labeling rules. [14] also provides a tool as a plug-in for the Protégé ontology editor; the tool supports ocean satellite images with the support of domain ontologies.
Another method classifies satellite images using a decision tree technique. This method extracts features from the satellite image based on pixel color and intensity; the extracted features help to determine the objects residing in the satellite images, and the images are then classified using a decision tree with the support of the identified objects. [16] presents a method for the classification of satellite images into multiple predefined land cover classes. This method is automated and uses segment-level classification in two phases. In phase 1, spectral and textural features are extracted for each pixel to train the classifier; a split-and-merge algorithm is then used to convert the pixel-level classification maps into segment-level maps of the satellite image, and meaningful labels are assigned to the clusters. The ISODATA method needs several parameters that control the number of clusters and the iterations to be run; in a few cases, clusters may contain pixels of different classes. Methods based on the K-Means technique are simple to process and fast to execute; their limitation is that the analyst should know the number of classes a priori. Support Vector Machine (SVM) classification can be used to extract a land-use map; SVM works on the assumption that there is no information on how the overall data are distributed, and it reduces satellite classification cost, increases speed, and improves accuracy. The minimum distance [20] approach calculates the mean spectrum of each predefined class and assigns the pixel to the group that has the least distance to the mean. It is easy to execute and simple to process, but the minimum distance method considers only the mean value. The Mahalanobis distance method [21] is very similar to the minimum distance method, but it also uses class statistics. The parallelepiped method checks the pixels of test images against class ranges to determine the class of each pixel; it is fast and easy to run, but overlap between classes may produce false results.
IMAGE CLASSIFICATION TECHNIQUES
Classification between objects is an easy task for humans, but it has proved to be a complex problem for machines. The rise of high-capacity computers, the availability of high-quality and low-priced video cameras, and the increasing need for automatic image analysis have driven interest in object classification. A typical classification system consists of a camera fixed high above the zone of interest, where images are captured and subsequently processed. Classification includes image sensors, image preprocessing, object detection, object segmentation, feature extraction, and object classification against stored patterns that are compared with the detected object to classify it into the proper category. The main steps are:
A. Object detection: detecting the position and other characteristics of the moving object in the image obtained from the camera.
B. Feature extraction: from the detected object, estimating, for example, the trajectory of the object in the image plane.
C. Training: selection of the particular attribute which best describes the pattern.
D. Classification of the object: the object classification step categorizes detected objects into predefined classes by using a suitable method that compares the image patterns with stored patterns. The features used in shape-based classification schemes include points (such as the centroid or a set of points) and the contour.
2) On the Basis of Training Samples Used:
Unsupervised Classification
Unsupervised classification examines the unknown pixels and divides them into a number of classes based on natural groupings present in the image values; the computer determines the spectrally separable classes.
Parametric classifiers: parameters like the mean vector and covariance matrix are used, under an assumption of Gaussian distribution. These parameters are frequently generated from training samples.
Nonparametric classifiers: no assumption is made about the data distribution, and other criteria are used to calculate class separation. Examples: artificial neural network, support vector machine.
Per-pixel classifiers: the spectra of all training-set pixels for a given feature combine the contributions of all materials within a pixel, so whether parametric or nonparametric, the accuracy may suffer because of the impact of the mixed pixel problem. Examples: maximum likelihood, ANN, support vector machine.
Subpixel classifiers:
These estimate the membership of each pixel to each end member. A subpixel classifier has the capability to handle the mixed pixel problem and is suitable for medium and coarse spatial resolution imagery.
Per-field classifier:
This reduces the effect of within-class spectral heterogeneity and also improves the classification accuracy. It is generally used with GIS support.
Object-oriented classifiers:
Pixels of the image are united into objects, and then classification is performed on the basis of the objects. It involves two stages: image segmentation and image classification. Image segmentation unites pixels into objects, and a classification is then implemented on those objects.
Hard classification: also known as crisp classification, in which each pixel is required or forced to show membership in a single class. Examples: maximum likelihood, minimum distance, artificial neural network.
Contextual classification: this classification uses both spectral and spatial information.
The basic principle of the proposed algorithm is to integrate the K-Means algorithm with an edge detection filter.
Step 2: Convert the available image from the RGB color space to the L*a*b* color space.
Step 3: Classify the colors in 'a*b*' space using the K-means clustering algorithm.
Step 4: Label every pixel of the image using the results of the K-means algorithm.
Step 7: The Laplacian of Gaussian filter finds edges by looking for zero crossings.
Algorithm: 2
The proposed algorithms consist of two phases. In the first phase, we construct the clusters using the K-Means algorithm. If k and d are fixed, the problem can be exactly solved in time O(n^(dk+1) log n), where n is the number of entities to be clustered. In the second phase, we find the classified image by filtering the clusters through a Laplacian of Gaussian filter or through a Prewitt filter.
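The clustering phase might be sketched as below. This is a plain numpy Lloyd's iteration on hypothetical chromaticity values (standing in for the a*, b* channels), not the exact implementation used in the study, which relied on MATLAB:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's K-Means, used here to group pixels by their
    chromaticity values (e.g. the a*, b* channels) into k clusters."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to its nearest cluster center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its member pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers

# Two hypothetical color blobs in a 2-D (a*, b*) chromaticity space
ab = np.vstack([10.0 + np.random.default_rng(1).normal(0, 1, (50, 2)),
                60.0 + np.random.default_rng(2).normal(0, 1, (50, 2))])
labels, centers = kmeans(ab, k=2)
print(np.sort(centers.mean(axis=1)))  # cluster centers recover the blobs near 10 and 60
```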
Laplacian of Gaussian:
The Laplacian of an image highlights regions of rapid intensity change, and for this property the operator is popularly used for edge detection, for example in zero-crossing edge detectors. The image must be pre-processed in order to reduce the operator's sensitivity to noise: the Laplacian kernels approximate a second derivative measurement on the image and are therefore very sensitive to noise. To overcome this, the images are smoothed with a Gaussian filter before applying the Laplacian filter; the smoothing removes noise components as much as possible, mainly eliminating high-frequency noise. In practice, the Gaussian smoothing filter is first convolved with the Laplacian filter, and this hybrid filter is then convolved with the given image. This requires far fewer arithmetic operations and only one convolution of the image.
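As an illustration, a common 5x5 LoG kernel approximation applied to a synthetic step edge shows the zero crossing. The convolution routine and kernel values here are one standard choice, not necessarily those used in the study:

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w)))
    out = np.zeros(img.shape, dtype=float)
    fk = kernel[::-1, ::-1]  # flip kernel for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * fk).sum()
    return out

# One common 5x5 Laplacian-of-Gaussian kernel approximation
log_kernel = np.array([
    [ 0,  0, -1,  0,  0],
    [ 0, -1, -2, -1,  0],
    [-1, -2, 16, -2, -1],
    [ 0, -1, -2, -1,  0],
    [ 0,  0, -1,  0,  0],
], dtype=float)

# A vertical step edge: the LoG response changes sign at the edge
step = np.hstack([np.zeros((7, 4)), np.full((7, 4), 100.0)])
response = convolve2d(step, log_kernel)
print(response[3, 3], response[3, 4])  # negative then positive: a zero crossing
```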
Prewitt Filter
The Prewitt filter is another well-known edge detection filter; it uses two 3×3 kernels. One kernel responds to changes in the horizontal direction, and the other kernel responds to changes in the vertical direction.
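The two kernels can be written out explicitly; the small gradient-magnitude routine and test image below are illustrative:

```python
import numpy as np

# The two 3x3 Prewitt kernels: horizontal-change and vertical-change detectors
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
prewitt_y = prewitt_x.T

def prewitt_edges(img):
    """Gradient magnitude from the two Prewitt kernel responses."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(img.astype(float), 1)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * prewitt_x).sum()  # horizontal changes (vertical edges)
            gy[i, j] = (win * prewitt_y).sum()  # vertical changes (horizontal edges)
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response along the edge columns
img = np.hstack([np.zeros((5, 3)), np.ones((5, 3))])
mag = prewitt_edges(img)
print(mag[2, 2] > 0, mag[2, 1] == 0)  # edge responds; flat region does not
```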
In the present study we have used mainly two types of data: a topographic map and remote sensing data. The geo-referenced and merged remote sensing data of LISS III and PAN of IRS 1D of 2003 in digital mode were obtained from the National Remote Sensing Agency (NRSA), Government of India, Hyderabad. The spatial resolutions of LISS III and PAN are 23.5 and 5.8 meters, and their spectral resolutions are 4 bands and 1 band, respectively. The topographic map 57 O/6 was converted to digital mode by scanning. The topographic map was geo-referenced with longitudes and latitudes using the ArcGIS and MATLAB software and spatial analyst tools, and the boundary of the study area was demarcated. A supervised signature extraction with the maximum likelihood algorithm was employed to classify the digital data of the IRS 1D geo-referenced and merged LISS III and PAN for land use/land cover mapping for the year 2003. Before the preprocessing and classification of satellite imagery began, an extensive field survey was performed throughout the study area using Global Positioning System (GPS) equipment. This survey was performed in order to obtain accurate location point data for each land use and land cover class included in the classification scheme. The imagery was enhanced using histogram equalization in ERDAS Imagine to improve the image quality and to achieve better interpretability. Spectral signatures were then developed from specified locations in the image. These specified locations are given the generic name training sites and are defined by the user. Generally, a vector layer is digitized over the raster scene; the vector layer consists of various polygons overlaying different land use types. The training sites help to develop spectral signatures for the outlined areas. The land use maps pertaining to two different periods were used for post-classification comparison, which facilitated the estimation of changes in the land use categories and of their dynamism. Post-classification comparison is the most commonly used quantitative method of change detection, with fairly good results; it involves independently produced spectral classification results from different data sets. The experiments were performed on an Intel Core 2 Duo machine with a T9400 chipset, 2.53 GHz CPU, and 4 GB RAM, running Microsoft Windows 7. To investigate, we first run the K-Means algorithm, and the input image with the region of interest is divided into various clusters.
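Post-classification comparison can be sketched as a cross-tabulation of the two label maps; the tiny grids and class codes below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical LU/LC label maps for the two dates (codes: 0 agriculture,
# 1 built-up, 2 water) on a tiny 3x3 grid; each cell stands for some area.
lulc_1976 = np.array([[0, 0, 2],
                      [0, 0, 2],
                      [0, 1, 1]])
lulc_2003 = np.array([[1, 0, 2],
                      [1, 1, 2],
                      [0, 1, 1]])
n_classes = 3

# Cross-tabulation ("from-to" change matrix): entry [i, j] counts cells
# that were class i in 1976 and class j in 2003.
change_matrix = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(change_matrix, (lulc_1976.ravel(), lulc_2003.ravel()), 1)
print(change_matrix)

# Net change per class = column sums (2003) minus row sums (1976)
print(change_matrix.sum(axis=0) - change_matrix.sum(axis=1))  # agriculture lost, built-up gained
```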
Figure: Original image and the objects in the first cluster.
Figure 2: Flow chart of methodology for land use/land cover and change detection (the SOI toposheet of 1976 and the IRS imagery of 2003 are geo-referenced into a final rectified base, processed with field checks into land use/land cover maps, and compared for change detection).
The main reasons behind the LU/LC changes include rapid urbanization.
Vellore is a rapidly developing town. During the past few decades the study area has grown considerably, and the resulting construction and transportation activities have had a negative impact on the environmental health of the region. Due to the involvement of multiple data sets, we used the latest technologies, remote sensing and GIS, to quantify LU/LC. On the basis of interpretation of remote sensing imagery, field surveys, and existing study area conditions, we have classified the study area into eight categories, that is, agriculture, built-up area, dense forest, mining, open forest, other land, plantation, and water spread area. The study area covers 125 km2, and LU/LC changes were estimated from 1976 to 2003.
Figure: Land use/land cover
The LU/LC changes were greatest in agriculture, built-up area, plantation, other land, and dense forest from 1976 to 2003. Comparison of LU/LC in 1976 and 2003 derived from toposheet and satellite imagery interpretation indicates that the built-up area, comprising human habitation developed for nonagricultural uses like building, transport, and communications, expanded substantially from 5.91 km2 (1976) to 18.34 km2 (2003), a net addition of 12.44 km2. This is due to urban expansion and population increase in the study area during the study period. The agricultural lands, which are used for paddy and for the production of food, vegetables, and mixed varieties like mango, coconut, and other homestead trees, decreased sharply from 68.23 km2 (1976) to 21.45 km2 (2003), a net decrease of 46.78 km2. The study area witnessed a large amount of agricultural land converted into settlements and other urban development activities.
Water spread area, covering both man-made and natural water features such as rivers/streams, tanks, and reservoirs, also decreased from 12.09 km2 in 1976 to 9.91 km2 in 2003, a net decline of 2.18 km2. The decrease in water spread area occurred due to the gradual conversion of water spread area into built-up area or human developmental area as the population increased significantly during the past decades. Dense forest, comprising all land with tree cover of canopy density of 70% and above, declined significantly from 22.35 km2 in 1976 to 4.25 km2 in 2003, a net decrease of 18.10 km2. This is attributed to the conversion of forest lands into urban areas and other development activities.
Open forest land, comprising all lands with tree cover of canopy density between 10% and 40%, was not found in 1976, whereas there is a significant addition of 10.90 km2 of open forest in 2003. The plantation land, which includes agricultural tree crops and other horticulture nurseries, also increased from 0.79 km2 (1976) to 21.80 km2 (2003), a net increase of 21.01 km2. The other land, consisting of roads (mostly link roads joining the village settlements) and barren land with or without scrub and sandy areas, expanded substantially from 15.64 km2 (1976) to 38.22 km2 (2003), a net increase of 22.54 km2. In 1976 no mining activities were found in the study area, but a small addition of 0.13 km2 of mining land was found in 2003.