
2015 ICEO&SI and ICLEI Resilience Forum, June 28-30, Kaohsiung, Taiwan
PAPER No.

Assessment of the Grey-Level Co-Occurrence Matrix for Land Cover Classification using Multi-spectral UAV Image

Thanh Tung Do1, Tien Yin Chou2

1 Master in Urban Planning and Spatial Information, Feng Chia University, 100 Wenhwa Rd., Situn Dist., Taichung 40724, Taiwan R.O.C., st_tung@gis.tw
2 GIS Research Center, Feng Chia University, 100 Wenhwa Rd., Situn Dist., Taichung 40724, Taiwan R.O.C., jimmy@gis.tw

ABSTRACT
Texture features based on the grey-level co-occurrence matrix method are extracted from a UAV near-infrared image using four second-order statistics, eight window sizes, and two quantization levels. The four UAV multi-spectral bands are combined with each textural band individually and with all four textural bands together. From these combinations, a supervised classification method based on the maximum-likelihood algorithm is used to classify the land cover into five classes. Classification accuracy is measured by kappa coefficients calculated from confusion matrices. The results show that adding texture features to the spectral image significantly improves the classification accuracy of each land cover type compared with the classification obtained from the spectral image alone.
I. INTRODUCTION

The application of Unmanned Aerial Vehicles (UAVs) has increased considerably in recent years due to their greater availability and the miniaturization of sensors, GPS, inertial measurement units, and other components [7]. Compared to manned aircraft systems, UAVs can be used in high-risk situations and inaccessible areas without endangering human life, at low altitudes, and at flight profiles close to the objects, where manned systems cannot be flown [1]. Furthermore, in cloudy and drizzly weather conditions, data acquisition with a UAV is still possible when the distance to the object permits flying below the clouds. Supplementary advantages are the real-time capability and fast data acquisition, with images, video, and data transmitted in real time to the ground station.
With a very high spatial resolution (0.14 × 0.14 m) and multispectral bands (R-G-B-NIR), the level of detail present in a UAV image is considerably greater than in other multispectral satellite images. For visual interpretation, a finer spatial resolution permits better land cover discrimination. However, the increased amount of detail creates new problems for information extraction using automated classification techniques [3]: the finer spatial resolution increases the spectral-radiometric variation within land cover types.
There are two major approaches to tackling the problems related to this increased internal variance. The first applies a mathematical transformation to the original spectral data to remove the excess spectral information. The second treats the internal spectral variance of classes as valuable additional information for characterizing and identifying land covers.
Spectral, textural, and contextual features are three fundamental pattern elements used in the human interpretation of color photographs. Spectral features describe the average tonal variation in various bands of the visible and/or infrared portion of the electromagnetic spectrum, whereas textural features contain information about the spatial distribution of tonal variations within a band. Contextual features contain information derived from blocks of image data surrounding the area being analyzed. When small image areas from black-and-white images are processed independently by a machine, texture is the most important of these elements [2].
Texture is an important characteristic for the analysis of many types of images. It represents the first level of spatial properties that can be extracted from an image and can be defined as the relationships between grey levels in neighboring pixels that contribute to the overall appearance of the image. In statistical texture analysis, texture features are extracted from the statistical distribution of observed combinations of intensities at specified positions relative to each other in the image. According to the number of intensity points (pixels) in each combination, statistics are classified into first-order, second-order, and higher-order statistics. The Grey-Level Co-occurrence Matrix (GLCM) is one of the most popular methods for extracting second-order statistical texture features. Third- and higher-order textures consider the relationships among three or more pixels; these are theoretically possible but not commonly implemented due to calculation time and interpretation difficulty.
The GLCM contains the relative frequencies with which two neighboring pixels occur in the image, one with grey level i and the other with grey level j. Several statistical measures, such as contrast, entropy, and angular second moment, can be estimated from the GLCM to describe specific textural features of the image [2]. Each textural feature can be used to create a new texture band, which can be combined with the original spectral bands for classification.
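As an illustration of this construction, the following minimal numpy sketch (not from the paper; the toy image and function name are ours) builds a symmetric, normalized GLCM for one pixel offset and averages it over the four main directions at distance one, as the methodology below does:

```python
import numpy as np

def glcm(img, levels, offset):
    """Symmetric, normalized grey-level co-occurrence matrix for one
    (row, col) pixel offset."""
    rows, cols = img.shape
    dr, dc = offset
    M = np.zeros((levels, levels))
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[img[r, c], img[r2, c2]] += 1
                M[img[r2, c2], img[r, c]] += 1   # count both directions
    return M / M.sum()

# A tiny image already quantized to 4 grey levels (0..3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

# Distance-1 offsets for the four main angles (0°, 45°, 90°, 135°);
# averaging over them removes directionality, as assumed in the paper.
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
P = np.mean([glcm(img, 4, o) for o in offsets], axis=0)
```

Each entry P[i, j] is then the relative frequency with which grey levels i and j co-occur at distance one, independent of direction.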


When classifying the regions of an image using the GLCM method, there are several factors to consider: the spectral band, the quantization level of the image, the moving-window size, the distance and angle for co-occurrence computation, and the statistics used as texture measures.
In this study, five land cover types are classified from the original multispectral UAV image combined with its texture bands to evaluate the influence of GLCM-based texture features on classification accuracy. The major objectives of this study are: i) to evaluate the influence of the window size, the quantization level, and the statistics used as texture measures on classification accuracy; and ii) to measure the influence of the window size and the quantization level on the extracted texture features.
II. METHODOLOGY

The study site is an area located along the Zhuoshui River in Yunlin County, Taiwan (Fig. 1). The UAV image was acquired in July 2013. This is a rural area, with most land cover types related to vegetation and agricultural fields.

This selection covers a range of land-category spatial pattern dimensions in the UAV image and allows assessing the influence of window size on classification accuracy.
During co-occurrence matrix computation, the distance between pixels was kept constant at one. Based on the assumption that no land cover type exhibits a preferential texture directionality, the co-occurrence matrix was averaged over the four main angles (0°, 45°, 90°, and 135°). Four second-order statistics were calculated from the co-occurrence matrix: the angular second moment (ASM), the contrast (CON), the correlation (COR), and the entropy (ENT) ((1) to (4)).
$$\mathrm{ASM} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)^2 \qquad (1)$$

$$\mathrm{CON} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-j)^2\, P(i,j) \qquad (2)$$

$$\mathrm{COR} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (ij)\, P(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y} \qquad (3)$$

$$\mathrm{ENT} = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)\, \log P(i,j) \qquad (4)$$

where $P(i,j)$ is the $(i,j)$th entry in the normalized grey-level co-occurrence matrix; $N_g$ is the number of distinct grey levels in the quantized image; $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$ are the means and standard deviations of $P_x$ and $P_y$; and $P_x(i)$ is the $i$th entry of the marginal probability matrix obtained by summing the rows of $P(i,j)$ [2].
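The four statistics above can be computed directly from a normalized GLCM. A minimal numpy sketch (illustrative, not the authors' code; the uniform example matrix is made up; the correlation term uses zero-based indices, which leaves the result unchanged because correlation is shift-invariant):

```python
import numpy as np

def haralick_features(P):
    """ASM, CON, COR and ENT of a normalized GLCM P, per equations (1)-(4)."""
    Ng = P.shape[0]
    i, j = np.indices((Ng, Ng))
    g = np.arange(Ng)
    asm = np.sum(P ** 2)                                  # (1)
    con = np.sum((i - j) ** 2 * P)                        # (2)
    Px, Py = P.sum(axis=1), P.sum(axis=0)                 # marginal distributions
    mx, my = np.sum(g * Px), np.sum(g * Py)               # means of Px, Py
    sx = np.sqrt(np.sum((g - mx) ** 2 * Px))              # std devs of Px, Py
    sy = np.sqrt(np.sum((g - my) ** 2 * Py))
    cor = (np.sum(i * j * P) - mx * my) / (sx * sy)       # (3)
    ent = -np.sum(P[P > 0] * np.log(P[P > 0]))            # (4)
    return asm, con, cor, ent

# A uniform 4-level GLCM: maximal entropy, ASM = 1/16, zero correlation.
P = np.full((4, 4), 1.0 / 16)
asm, con, cor, ent = haralick_features(P)
```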
Figure 1. UAV true-color image of the study site

A. Texture band extraction and band combinations
Based on the GLCM method, sixty-four texture bands (Fig. 2) were created from the original UAV near-infrared band at a spatial resolution of 0.14 × 0.14 m. This spectral band exhibits better contrast between land cover types than the visible bands (R-G-B).

Figure 2. Creation of texture bands and band combinations

Quantization levels of 16 and 32 were chosen for texture band creation. Eight window sizes, from 3 × 3 pixels to 41 × 41 pixels, were also chosen for testing.

The texture bands were normalized onto a 256 grey-level scale using a linear transformation [6].
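The quantization, moving-window texture extraction, and 256-level normalization steps described above can be sketched as follows (an illustrative implementation, not the authors' code; the window statistic shown is GLCM entropy at distance one in the horizontal direction only, and the "NIR" band is synthetic random data):

```python
import numpy as np

def quantize(band, levels):
    """Linearly rescale a band onto integer grey levels 0..levels-1."""
    b = band.astype(np.float64)
    return ((b - b.min()) / (b.max() - b.min()) * (levels - 1)).round().astype(int)

def glcm_entropy(win, levels=16):
    """Entropy of the distance-1 horizontal GLCM of one window."""
    M = np.zeros((levels, levels))
    for i, j in zip(win[:, :-1].ravel(), win[:, 1:].ravel()):
        M[i, j] += 1
        M[j, i] += 1                 # symmetric counting
    P = M / M.sum()
    nz = P[P > 0]
    return -np.sum(nz * np.log(nz))

def texture_band(img, size, stat):
    """Slide a size x size window over a quantized image and write the
    chosen GLCM statistic of each window to the centre pixel."""
    half = size // 2
    pad = np.pad(img, half, mode='edge')
    out = np.zeros(img.shape)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = stat(pad[r:r + size, c:c + size])
    return out

rng = np.random.default_rng(0)
nir = rng.integers(0, 4096, size=(20, 20))   # synthetic 12-bit NIR band
q = quantize(nir, 16)                        # quantization level 16
ent = texture_band(q, 5, glcm_entropy)       # 5 x 5 window, ENT statistic
ent8 = quantize(ent, 256)                    # normalize onto 256 grey levels
```

In the study itself this is repeated for each of the four statistics, eight window sizes, and two quantization levels, yielding the sixty-four texture bands.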
B. Classification accuracy assessment
The four UAV multispectral bands were combined with each texture band individually and with all four texture bands together (Fig. 2). These combinations were classified using a supervised classification method based on the maximum-likelihood algorithm, and the classifications were repeated for each window size and quantization level. A classification from the original UAV multispectral image alone was also performed to assess the contribution of texture features to the discrimination of land cover types. The land cover classification scheme includes: bare soil, dense vegetation, agriculture, grassland, and residential areas. Most of the classes follow the USGS scheme and emphasize the pattern and spatial variability of the image [8].
The sample areas (signatures collected from the image) were selected as training sites by on-screen digitizing. A total of 250 random samples (50 per cover type) were chosen using another RGB UAV image, at a spatial resolution of 0.06 × 0.06 m, as the reference image. These areas were systematically and proportionally selected throughout the whole image.
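A maximum-likelihood classifier of the kind used here assigns each pixel to the class whose multivariate Gaussian model, estimated from the training signatures, gives the highest likelihood. A minimal sketch, assuming Gaussian class models; the class names, band count, and synthetic training data are illustrative only:

```python
import numpy as np

def fit_ml(samples):
    """Estimate per-class mean, inverse covariance and log-determinant
    from training pixels. `samples` maps class -> (n_pixels, n_bands)."""
    stats = {}
    for cls, X in samples.items():
        cov = np.cov(X, rowvar=False)
        stats[cls] = (X.mean(axis=0), np.linalg.inv(cov),
                      np.linalg.slogdet(cov)[1])
    return stats

def classify_ml(pixels, stats):
    """Assign each pixel vector to the class with maximum Gaussian
    log-likelihood (constant terms dropped)."""
    names = list(stats)
    scores = []
    for cls in names:
        mu, icov, logdet = stats[cls]
        d = pixels - mu
        # -0.5 * (log|C| + Mahalanobis distance) per pixel
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, icov, d)))
    return [names[k] for k in np.argmax(np.array(scores), axis=0)]

# Two well-separated synthetic "classes" in a 2-band feature space.
rng = np.random.default_rng(1)
train = {
    'bare soil': rng.normal([10, 50], 3, size=(50, 2)),
    'grassland': rng.normal([40, 20], 3, size=(50, 2)),
}
stats = fit_ml(train)
labels = classify_ml(np.array([[11.0, 49.0], [39.0, 21.0]]), stats)
```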


To measure the classification accuracy, the kappa coefficient was calculated from confusion matrices. This coefficient measures the agreement between the estimated land cover classification and the reference land cover, or determines whether the values contained in an error matrix represent a result significantly better than random [5].
The kappa coefficient is computed as:

$$\hat{K} = \frac{N \sum_{i=1}^{r} x_{ii} - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{r} (x_{i+} \cdot x_{+i})} \qquad (5)$$

where $N$ is the total number of sites in the matrix, $r$ is the number of rows in the matrix, $x_{ii}$ is the number in row $i$ and column $i$, $x_{i+}$ is the total for row $i$, and $x_{+i}$ is the total for column $i$ [5].

D. Influence of window size and texture feature on classification accuracy
The window size is a very important factor, responsible for much of the variation in the image classification process. To evaluate the influence of window size on classification accuracy, the means (over the two quantization levels) of the five kappa coefficients obtained for each cover type were calculated for the eight window sizes (Fig. 3).
To calculate the agreement between the classified data and the reference data for an individual class, the conditional Khat coefficient was calculated:
Figure 3. Mean kappa coefficient of each land cover type

Figure 4. Discrimination between agriculture and bare soil at a window size of 25 × 25 pixels

$$\hat{K}_i = \frac{N x_{ii} - x_{i+}\, x_{+i}}{N x_{i+} - x_{i+}\, x_{+i}} \qquad (6)$$

where $x_{ii}$ is the number of observations correctly classified for a particular category, and $N$ is the total number of observations in the entire error matrix.
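Equations (5) and (6) can be computed directly from an error matrix using its diagonal and its row and column totals. A minimal sketch (the 2 × 2 matrix is made up for illustration):

```python
import numpy as np

def kappa(cm):
    """Overall kappa coefficient of a confusion matrix, equation (5)."""
    cm = np.asarray(cm, dtype=np.float64)
    N = cm.sum()
    diag = np.trace(cm)                              # sum of x_ii
    chance = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) # sum of x_i+ * x_+i
    return (N * diag - chance) / (N ** 2 - chance)

def conditional_kappa(cm, i):
    """Conditional Khat coefficient of class i (row i), equation (6)."""
    cm = np.asarray(cm, dtype=np.float64)
    N = cm.sum()
    ri, ci = cm.sum(axis=1)[i], cm.sum(axis=0)[i]    # row and column totals
    return (N * cm[i, i] - ri * ci) / (N * ri - ri * ci)

# Toy 2-class error matrix: 85 of 100 sites correctly classified.
cm = [[45, 5],
      [10, 40]]
k = kappa(cm)                   # overall agreement beyond chance
k0 = conditional_kappa(cm, 0)   # agreement for class 0 alone
```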
III. RESULTS AND DISCUSSION

C. Classification accuracy improvement by adding texture bands
The classification accuracy calculated from the original multispectral UAV image is low for each land cover type, especially for the bare soil areas (Table I). The results show that the classification accuracy is considerably improved when texture features are added to the original spectral image. The most significant improvement is in the classification of grassland (from 69 to 97%), followed by bare soil (from 47 to 73%), agriculture (from 70 to 84%), and residential areas (from 82 to 97%).
Table I. Classification accuracy comparison

Cover types         Original UAV   Texture* feature   UAV & texture
Bare soil           0.47           8bds-41/16         0.73
Grassland           0.69           ENT-25/32          0.97
Dense Vegetation    0.78           8bds-25/32         0.87
Agriculture         0.70           ASM-13/32          0.84
Residential         0.82           8bds-33/32         0.97

* 8bds = 8 bands (4 spectral bands, 4 texture bands); the first number is the window size, the second is the quantization level.

The texture combination that provides the best classification accuracy changes greatly from one cover type to another. For the bare soil cover type, the combination of the 4 spectral bands and the 4 texture bands provides the highest classification accuracy. On the other hand, the combination of the 4 spectral bands and the single second-order statistic CON provides the best classification accuracy for the grassland cover type.

Figure 5. Discrimination between grassland and dense vegetation at a window size of 25 × 25 pixels

The results show that the classification accuracy changes from one window size to another, and there appears to be a window size that maximizes the classification accuracy for each land cover type. The window size of 25 × 25 pixels can be seen as the most suitable for obtaining accurate classification results for more than one land cover type. The smaller window sizes do not show satisfactory results, as they may not capture the pattern of most classes. The improvement in the discrimination between cover types obtained by adding a texture feature to the spectral bands can be described through the statistics of the training data. The discrimination between agricultural field and bare soil, grassland and dense vegetation, and grassland and


residential areas is shown in Figs. 4, 5, and 6. The statistical separability is very low when the multispectral bands are used alone and is significantly improved when texture features are added to the original spectral bands. The class separability increases because a unique texture pattern characterizes each class. For several cover types, the signatures still overlap; however, by using multiple window sizes of texture features, the separability is improved compared with the results obtained from the spectral bands only. ENT and ASM provide good separability between agriculture, bare soil, and grassland, while ENT at a window size of 41 × 41 pixels provides good separability between grassland and residential areas.

E. Influence of quantization level and window size on texture feature extraction
To evaluate the relationship between the window size, the quantization level, and the creation of texture features, histograms of the four texture bands at the eight window sizes and two quantization levels were extracted (Fig. 7 and 8). The observations show that the trends of the texture features extracted at the 16 and 32 quantization levels are almost identical and contain basically the same information. The ENT and CON values increase progressively with increasing window size, whereas the ASM and COR values decrease. The variation of all four texture images is significant from window sizes of 3 × 3 pixels to 13 × 13 pixels, but changes little over the larger window sizes. The ENT image has the smallest variation, while the COR image has the highest.

Figure 7. Mean grey level values of texture images at a quantization level of 16

Figure 8. Mean grey level values of texture images at a quantization level of 32
IV. CONCLUSION

In this study, a textural approach based on the GLCM method was used to obtain a significant improvement in land cover classification from a multispectral UAV image. The classification accuracy is influenced by all three factors: window size, statistics, and quantization level. It is considerably improved when texture features are added to the original spectral image.

Figure 6. Discrimination between grassland and residential areas at a window size of 41 × 41 pixels

In further work, it is necessary to evaluate the influence of variables directly associated with the GLCM method, such as the inter-pixel angle and inter-pixel distance, on characterizing a particular cover type from a UAV image. It is also important to extract texture features from more window sizes and second-order statistics to identify the best combinations of spectral and textural images that maximize the classification accuracy.
ACKNOWLEDGMENT
We gratefully acknowledge the funding and data support from the GIS Research Center, Feng Chia University, Taiwan.
REFERENCES
[1] A. Rango, S. Laliberte, C. Steele, E. Herrick, B. Bestelmeyer, T. Schmugge, A. Roanhorse, and V. Jenkins, "Using unmanned aerial vehicles for rangelands: Current applications and future potentials," Environmental Practice, 8:159-168, 2006.
[2] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, No. 6, November 1973.
[3] D. J. Marceau, P. J. Howarth, J.-M. Dubois, and D. Gratton, "Evaluation of the Grey-Level Co-Occurrence Matrix method for land-cover classification using SPOT imagery," IEEE Transactions on Geoscience and Remote Sensing, Vol. 28, No. 4, July 1990.
[4] P. Mohanaiah, P. Sathyanarayana, and L. GuruKumar, "Image texture feature extraction using GLCM approach," International Journal of Scientific and Research Publications, Vol. 3, Issue 5, May 2013.
[5] J. R. Jensen, Introductory Digital Image Processing, Prentice Hall, 3rd edition, 2004.
[6] R. Wang, "Advanced methods in grey level segmentation," http://fourier.eng.hmc.edu/e161/lectures/digital_image/node9.html, December 2004.
[7] A. S. Laliberte, J. E. Herrick, A. Rango, and C. Winters, "Acquisition, orthorectification, and object-based classification of Unmanned Aerial Vehicle (UAV) imagery for rangeland monitoring," Photogrammetric Engineering & Remote Sensing, Vol. 76, No. 6, June 2010, pp. 661-672.
[8] USGS Land Cover Institute (LCI): http://landcover.usgs.gov/classes.php, December 2012.
