
Dept of EEE

K-means Algorithm

Abstract
k-means is a rather simple but well-known algorithm for grouping objects, i.e., clustering. All objects need to be represented as a set of numerical features. In addition, the user has to specify the number of groups (referred to as k) to be identified. Each object can be thought of as being represented by a feature vector in an n-dimensional space, n being the number of features used to describe the objects to be clustered. The algorithm then chooses k points in that vector space at random; these points serve as the initial centers of the clusters. Afterwards, every object is assigned to the center it is closest to. Usually the distance measure is chosen by the user and determined by the learning task. After that, a new center is computed for each cluster by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until the process converges. The algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, the choice of initial centers and the computation of new average centers have been explored, as well as the estimation of the number of clusters k, yet the main principle always remains the same. In this project we discuss the k-means clustering algorithm, its implementation and its application to the problem of unsupervised learning.

Contents
Abstract
1. Introduction
2. The k-means algorithm
3. How the k-means clustering algorithm works
4. Task Formulation
4.1 K-means implementation
4.2 Estimation of parameters of a Gaussian mixture
4.3 Unsupervised learning
5. Limitations
6. Difficulties with k-means
7. Available software
8. Applications of the k-Means Clustering Algorithm
9. Conclusion
References

The k-means Algorithm

1 Introduction

In this project, we describe the k-means algorithm, a straightforward and widely-used clustering algorithm. Given a set of objects (records), the goal of
clustering or segmentation is to divide these objects into groups or clusters
such that objects within a group tend to be more similar to one another as
compared to objects belonging to different groups. In other words, clustering
algorithms place similar points in the same cluster while placing dissimilar
points in different clusters. Note that, in contrast to supervised tasks such as
regression or classification where there is a notion of a target value or class
label, the objects that form the inputs to a clustering procedure do not come
with an associated target. Therefore clustering is often referred to as
unsupervised learning. Because there is no need for labelled data, unsupervised
algorithms are suitable for many applications where labeled data is difficult to
obtain. Unsupervised tasks such as clustering are also often used to explore and
characterize the dataset before running a supervised learning task. Since
clustering makes no use of class labels, some notion of similarity must be
defined based on the attributes of the objects. The definition of similarity and
the method in which points are clustered differ based on the clustering
algorithm being applied. Thus, different clustering algorithms are suited to
different types of data sets and different purposes. The best clustering
algorithm to use therefore depends on the application. It is not uncommon to try several different algorithms and choose whichever proves the most useful.
The k-means algorithm is a simple iterative clustering algorithm that
partitions a given dataset into a user-specified number of clusters, k. The
algorithm is simple to implement and run, relatively fast, easy to adapt, and
common in practice. It is historically one of the most important algorithms in
data mining. Historically, k-means in its essential form was discovered independently by several researchers across different disciplines, most notably Lloyd (1957, 1982), Forgy (1965), Friedman and Rubin (1967), and MacQueen (1967).
A detailed history of k-means along with descriptions of several variations are
given in Jain and Dubes. Gray and Neuhoff provide a nice historical
background for k-means placed in the larger context of hill-climbing
algorithms.
In the rest of this project, we describe how k-means works, discuss its limitations and difficulties, and present some applications of the algorithm.

2 The k-means algorithm


The k-means algorithm applies to objects that are represented by points in a d-dimensional vector space. Thus, it clusters a set of d-dimensional vectors,

$D = \{\, x_i \mid i = 1, \ldots, N \,\},$

where x_i ∈ R^d denotes the i-th object or data point. As discussed in the introduction, k-means is a clustering algorithm that partitions D into k clusters
of points. That is, the k-means algorithm clusters all of the data points in D
such that each point x i falls in one and only one of the k partitions. One
can keep track of which point is in which cluster by assigning each point a
cluster ID. Points with the same cluster ID are in the same cluster, while points
with different cluster IDs are in different clusters. One can denote this with a
cluster membership vector m of length N, where mi is the cluster ID of x i .
The value of k is an input to the base algorithm. Typically, the value for k is
based on criteria such as prior knowledge of how many clusters actually appear
in D , how many clusters are desired for the current application, or the types
of clusters found by exploring/experimenting with different values of k .
Knowing how k is chosen is not necessary for understanding how k-means partitions the dataset D, and we will discuss in a later section how to choose k when it is not prespecified.
In k-means, each of the k clusters is represented by a single point in R^d. Let us denote this set of cluster representatives as the set

$C = \{\, c_j \mid j = 1, \ldots, k \,\}.$
These k cluster representatives are also called the cluster means or cluster
centroids. In clustering algorithms, points are grouped by some notion of
closeness or similarity. In k-means, the default measure of closeness is the
Euclidean distance. In particular, one can readily show that k-means attempts to minimize the following non-negative cost function:

$\mathrm{Cost} = \sum_{i=1}^{N} \min_{1 \le j \le k} \, \lVert x_i - c_j \rVert_2^2 \qquad (1)$

In other words, k-means attempts to minimize the total squared Euclidean


distance between each point x i and its closest cluster representative c j .
Equation 1 is often referred to as the k-means objective function.
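As a concrete illustration, the objective in Equation 1 can be evaluated directly from the data and the current cluster representatives. The short Matlab sketch below assumes the data are stored row-wise in an N-by-d matrix X and the representatives in a k-by-d matrix C; the function name and variable names are illustrative, not taken from the original text.

function cost = kmeans_cost(X, C)
    % Evaluate the k-means objective of Equation 1 for data X (N-by-d)
    % and cluster representatives C (k-by-d).
    N = size(X, 1);
    k = size(C, 1);
    d2 = zeros(N, k);                       % squared distances to every representative
    for j = 1:k
        dxc = X - repmat(C(j, :), N, 1);    % differences to representative j
        d2(:, j) = sum(dxc .^ 2, 2);        % squared Euclidean distances
    end
    cost = sum(min(d2, [], 2));             % each point contributes its closest representative
end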

3 How the k-means clustering algorithm works


Here is the k-means clustering algorithm, step by step.

Figure: k-means clustering algorithm flowchart.


Step 1. Begin with a decision on the value of k = the number of clusters.
Step 2. Put any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:
1. Take the first k training samples as single-element clusters.
2. Assign each of the remaining (N - k) training samples to the cluster with the nearest centroid. After each assignment, recompute the centroid of the gaining cluster.
Step 3. Take each sample in sequence and compute its distance from the centroid of each of the clusters. If a sample is not currently in the cluster with the closest centroid, switch this sample to that cluster and update the centroids of the cluster gaining the new sample and the cluster losing the sample.
Step 4. Repeat Step 3 until convergence is achieved, that is, until a pass through the training samples causes no new assignments.

If the number of data points is less than the number of clusters, we simply assign each data point as the centroid of a cluster; each centroid is given a cluster number. If the number of data points is larger than the number of clusters, then for each data point we calculate the distance to all centroids and find the minimum; the point is said to belong to the cluster whose centroid is nearest to it.
Since we are not sure about the location of the centroids, we need to adjust the centroid locations based on the currently assigned data, and then reassign all the data points to the new centroids. This process is repeated until no data point moves to another cluster anymore. Mathematically, this loop can be proved to converge. Convergence will always occur because:
1. Each switch in Step 3 decreases the sum of distances from each training sample to that training sample's group centroid.
2. There are only finitely many partitions of the training samples into k clusters.
A minimal code sketch of this loop is given below.
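The following Matlab sketch illustrates the loop just described. It is a minimal illustration, not the toolbox implementation used later in this report: centroids are initialized with k randomly chosen data points (one of several possible initializations), and the function name simple_kmeans and all variable names are assumptions made for this example.

function [labels, C] = simple_kmeans(X, k)
    % X is an N-by-d data matrix (one point per row), k the number of clusters.
    N = size(X, 1);
    perm = randperm(N);
    C = X(perm(1:k), :);                    % initial centroids: k random data points
    labels = zeros(N, 1);
    changed = true;
    while changed
        % assignment step: each point goes to its closest centroid
        d2 = zeros(N, k);
        for j = 1:k
            dxc = X - repmat(C(j, :), N, 1);
            d2(:, j) = sum(dxc .^ 2, 2);
        end
        [~, new_labels] = min(d2, [], 2);
        changed = any(new_labels ~= labels);
        labels = new_labels;
        % update step: recompute each centroid as the mean of its assigned points
        for j = 1:k
            members = (labels == j);
            if any(members)
                C(j, :) = mean(X(members, :), 1);
            end                             % empty clusters are left unchanged here
        end
    end
end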

4 Task Formulation
4.1 K-means implementation

To implement the k-means clustering algorithm we follow the description given below.
Tasks:
1. Download the test data data.mat and display them using ppatterns(). The file data.mat contains a single variable X, a 2×N matrix of 2D points.
2. Run the algorithm. In each iteration, display the locations of the means μ_j and the current classification of the test data. To display the classification, use the ppatterns function again.
3. In each iteration, plot the average distance between the points and their respective closest means μ_j.
4. Experiment with different numbers K of means, e.g. K = 2, 3, 4. Execute the algorithm repeatedly, initialising the mean values μ_j with random positions; use the function rand. (A sketch of one possible way to carry out these tasks follows this list.)
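The sketch below shows one possible way to organise these tasks: it loads the data, runs a plain k-means loop for K = 2, 3, 4 with random initial means, and records the average distance between each point and its closest mean in every iteration. The assumption that data.mat provides a 2-by-N matrix X comes from the task description; the fixed iteration budget and all other names are choices made for this sketch (in the actual lab solution, the toolbox functions ppatterns and kminovec listed in the appendix are used instead).

% Sketch (assumed names): run plain k-means for several values of K with
% random initial means and record the average distance between each point
% and its closest mean in every iteration.
load('data.mat');                        % provides X, a 2-by-N matrix of 2D points
P = X';                                  % points as rows (N-by-2)
for K = 2:4
    mu = 6 * rand(K, 2) - 3;             % random initial means, roughly in [-3, 3]^2
    avg_dist = [];
    for iter = 1:50                      % fixed iteration budget for this sketch
        D = zeros(size(P, 1), K);
        for j = 1:K
            dxc = P - repmat(mu(j, :), size(P, 1), 1);
            D(:, j) = sqrt(sum(dxc .^ 2, 2));
        end
        [dmin, labels] = min(D, [], 2);
        avg_dist(end + 1) = mean(dmin);  % average point-to-nearest-mean distance
        for j = 1:K
            if any(labels == j)
                mu(j, :) = mean(P(labels == j, :), 1);
            end
        end
    end
    figure; plot(avg_dist);              % average distance per iteration for this K
    title(sprintf('K = %d', K));
end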

4.2 Estimation of parameters of a Gaussian mixture

Let us assume that the distribution of our data is a mixture of three Gaussians:

$p(x) = \sum_{j=1}^{3} P(j)\, N(x \mid \mu_j, \Sigma_j),$

where N(μ_j, Σ_j) denotes a normal distribution with mean value μ_j and covariance Σ_j, and P(j) denotes the weight of the j-th Gaussian within the mixture. The task is, for given input data x_1, x_2, ..., x_N, to estimate the mixture parameters μ_j, Σ_j, P(j).
Tasks:
1. In each iteration of the implemented k-means algorithm, re-estimate the means μ_j and covariances Σ_j using the maximum likelihood method. P(j) will be the relative number (percentage) of data points classified to the j-th cluster.
2. In each iteration, plot the total likelihood L of the estimated parameters μ_j, Σ_j, P(j). (A sketch of this re-estimation step follows this list.)
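A Matlab sketch of this re-estimation step is given below. Given the current hard assignment of points to clusters, it computes the maximum-likelihood mean and covariance of each cluster, the mixture weights as relative cluster sizes, and the total log-likelihood of the data under the resulting mixture (the log of the likelihood is used here for numerical convenience). The function name, variable names and the assumption that every cluster is non-empty are choices made for this sketch.

function [Mu, Sigma, Prior, L] = reestimate_mixture(X, labels, k)
    % X is N-by-d data, labels is an N-by-1 vector of cluster IDs in 1..k
    % produced by the k-means assignment step (assumes no cluster is empty).
    [N, d] = size(X);
    Mu = zeros(k, d);
    Sigma = zeros(d, d, k);
    Prior = zeros(k, 1);
    for j = 1:k
        Xj = X(labels == j, :);
        nj = size(Xj, 1);
        Prior(j) = nj / N;                             % relative cluster size
        Mu(j, :) = mean(Xj, 1);                        % ML estimate of the mean
        centered = Xj - repmat(Mu(j, :), nj, 1);
        Sigma(:, :, j) = (centered' * centered) / nj;  % ML estimate of the covariance
    end
    L = 0;                                             % total log-likelihood of the data
    for i = 1:N
        p = 0;
        for j = 1:k
            v = (X(i, :) - Mu(j, :))';
            S = Sigma(:, :, j);
            p = p + Prior(j) * exp(-0.5 * v' * (S \ v)) / sqrt((2 * pi)^d * det(S));
        end
        L = L + log(p);
    end
end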

4.3 Unsupervised learning

Apply k-means clustering to the problem of "unsupervised learning". The input consists of images of three letters, H, L, and T. It is not known which letter is shown in which image. The task is to classify the images into three classes. The images will be described by the two usual measurements:

x = (sum of pixel intensities in the left half of the image) - (sum of pixel intensities in the right half of the image)
y = (sum of pixel intensities in the upper half of the image) - (sum of pixel intensities in the lower half of the image)
Tasks:
1. Download the images of letters image_data.mat and compute the measurements x and y.
2. Using the k-means method, classify the images into three classes. In each iteration, display the means μ_j, the current classification, and the likelihood L.
3. After the iteration stops, compute and display the average image of each of the three classes. To display the final classification, you can use the show_class function. (A sketch of computing the average images follows this list.)
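The sketch below shows one way to carry out the last task: after k-means has assigned a class to every image, it averages the images of each class and displays the result. The field name data.images and the label vector model.class follow the appendix code at the end of this report; everything else (loop structure, display calls) is an assumption made for this sketch.

% Sketch: compute and display the average image of each of the three classes.
data = load('image_data.mat');           % provides data.images, an H-by-W-by-M array
labels = model.class;                    % class IDs produced by the k-means run
for c = 1:3
    members = find(labels == c);         % indices of images assigned to class c
    avg_img = mean(double(data.images(:, :, members)), 3);  % pixel-wise average
    figure; imagesc(avg_img); colormap gray; axis image;
    title(sprintf('Average image of class %d', c));
end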
Figures: visualisation of the classification by show_class for Class 1, Class 2 and Class 3.

5 Limitations
The greedy-descent nature of k-means on a non-convex cost implies that the
convergence is only to a local optimum, and indeed the algorithm is typically
quite sensitive to the initial centroid locations. In other words, initializing the
set of cluster representatives C differently can lead to very different clusters,
even on the same dataset D . A poor initialization can lead to very poor
clusters.
The local minima problem can be countered to some extent by running the
algorithm multiple times with different initial centroids and then selecting the
best result, or by doing limited local search about the converged solution. Other
approaches include methods that attempt to keep k-means from converging to local minima. A number of different initialization methods have also been proposed in the literature, along with discussions of other limitations of k-means.
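One simple way to apply the multiple-restart idea mentioned above is sketched below: run k-means several times from different random initializations and keep the solution with the smallest objective value (Equation 1). The helper names simple_kmeans and kmeans_cost refer to the earlier sketches in this report, and the number of restarts is an arbitrary choice; X and k are assumed to be defined already.

% Sketch (assumed names): ten random restarts, keeping the result with the
% lowest value of the k-means objective (Equation 1).
best_cost = inf;
for run = 1:10
    [labels, C] = simple_kmeans(X, k);   % earlier sketch; random initialization inside
    cost = kmeans_cost(X, C);            % earlier sketch; evaluates Equation 1
    if cost < best_cost
        best_cost = cost;
        best_labels = labels;
        best_C = C;
    end
end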
As mentioned, choosing the optimal value of k may be difficult. If one has
knowledge about the dataset, such as the number of partitions that naturally
comprise the dataset, then that knowledge can be used to choose k .
Otherwise, one must use some other criteria to choose k , thus solving the
model selection problem. One naive solution is to try several different values of
k and choose the clustering which minimizes the k-means objective function
(Equation 1). Unfortunately, the value of the objective function is not as
informative as one would hope in this case. For example, the cost of the
optimal solution decreases with increasing k till it hits zero when the
number of clusters equals the number of distinct data points. This makes it
more difficult to use the objective function to (a) directly compare solutions with different numbers of clusters and (b) find the optimal value of k.


Thus, if the desired k is not known in advance, one will typically run k-means with different values of k, and then use some other, more suitable criterion to select one of the results. For example, SAS uses the cubic clustering criterion, while X-means adds a complexity term (which increases with k) to the original cost function (Eq. 1) and then identifies the k which
minimizes this adjusted cost. Alternatively, one can progressively increase the
number of clusters, in conjunction with a suitable stopping criterion. Bisecting
k-means achieves this by first putting all the data into a single cluster, and then
recursively splitting the least compact cluster into two using 2-means. The
celebrated LBG algorithm used for vector quantization doubles the number of
clusters till a suitable code-book size is obtained. Both these approaches thus
alleviate the need to know k beforehand.

6 Difficulties with k-means


k-means suffers from several other problems that can be understood by first
noting that the problem of fitting data using a mixture of k Gaussians with
identical, isotropic covariance matrices (σ²I), where I is the identity matrix, results in a "soft" version of k-means.
More precisely, if the soft assignments of data points to the mixture
components of such a model are instead hardened so that each data point is
solely allocated to the most likely component, then one obtains the k-means
algorithm. From this connection it is evident that k-means inherently assumes
that the dataset is composed of a mixture of k balls or hyperspheres of data, and
each of the k clusters corresponds to one of the mixture components. Because
of this implicit assumption, k-means will falter whenever the data is not well
described by a superposition of reasonably separated spherical Gaussian
distributions. For example, k-means will have trouble if there are non-convex
shaped clusters in the data. This problem may be alleviated by rescaling the
data to whiten it before clustering, or by using a different distance measure
that is more appropriate for the dataset. For example, information-theoretic
clustering uses the KL-divergence to measure the distance between two data
points representing two discrete probability distributions. It has been recently
shown that if one measures distance by selecting any member of a very large
class of divergences called Bregman divergences during the assignment step
and makes no other changes, the essential properties of k-means, including
guaranteed convergence, linear separation boundaries and scalability, are
retained. This result makes k-means effective for a much larger class of
datasets so long as an appropriate divergence is used.
Another method of dealing with non-convex clusters is by pairing k-means
with another algorithm. For example, one can first cluster the data into a large
number of groups using k-means. These groups are then agglomerated into
larger clusters using single link hierarchical clustering, which can detect
complex shapes. This approach also makes the solution less sensitive to initialization, and since the hierarchical method provides results at multiple resolutions, one does not need to worry about choosing an exact value for k either; instead, one can simply use a large value for k when creating the initial clusters.
The algorithm is also sensitive to the presence of outliers, since the mean is not a robust statistic. A preprocessing step to remove outliers can be helpful. Post-processing the results, for example to eliminate small clusters or to merge close clusters into one larger cluster, is also desirable.
Another potential issue is the problem of empty clusters. When running k-means, particularly with large values of k and/or when the data resides in a very high-dimensional space, it is possible that at some point of the execution there exists a cluster representative c_j such that all points x_i in D are closer to some other cluster representative that is not c_j. When points in D are assigned to their closest cluster, the j-th cluster will then have zero points assigned to it. That is, cluster
j is now an empty cluster. The standard algorithm does not guard against empty
clusters, but simple extensions (such as reinitializing the cluster representative
of the empty cluster or stealing some points from the largest cluster) are
possible.
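A minimal sketch of the first of these extensions, reinitializing the representative of an empty cluster, is shown below; the policy of picking a random data point is only one possibility and is not prescribed by the text (variable names follow the earlier sketches).

% Sketch: after the assignment step, detect empty clusters and reinitialize
% their representatives with randomly chosen data points.
for j = 1:k
    if ~any(labels == j)                    % cluster j received no points
        C(j, :) = X(randi(size(X, 1)), :);  % one simple policy: pick a random data point
    end
end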

7 Available software
Because of the k-means algorithm's simplicity, effectiveness, and historical
importance, software to run the k-means algorithm is readily available in
several forms. It is a standard feature in many popular data mining software
packages. For example, it can be found in Weka or in SAS under the
FASTCLUS procedure. It is also commonly included as add-ons to existing
software. For example, several implementations of k-means are available as
parts of various toolboxes in Matlab. k-means is also available in Microsoft Excel after adding the XLMiner add-in. Finally, several stand-alone versions of k-means exist and can easily be found on the Internet. The algorithm is also straightforward to code, and the reader is encouraged to create their own implementation of k-means as an exercise.
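For example, if the Statistics and Machine Learning Toolbox is installed, Matlab's built-in kmeans function can be called as in the short sketch below; the data matrix X (observations in rows) and the value of k are assumed to be defined already.

% Built-in k-means (Statistics and Machine Learning Toolbox): rows of X are
% observations, idx holds the cluster ID of each row, C holds the k centroids.
[idx, C] = kmeans(X, k);
% Multiple random restarts can be requested to reduce sensitivity to initialization.
[idx, C] = kmeans(X, k, 'Replicates', 5);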

8 Applications of the k-Means Clustering Algorithm


Briefly, we mention optical character recognition, speech recognition, and encoding/decoding as example applications of k-means. However, a survey of the literature on the subject offers a more in-depth treatment of some other practical applications, such as "data detection for burst-mode optical receiver[s]" and recognition of musical genres. Researchers describe "burst-mode data-transmission systems": a "significant feature of burst-mode data transmissions is that due to unequal distances between" sender and receivers, "signal attenuation is not the same" for all receivers. Because of this, "conventional receivers are not suitable for burst-mode data transmissions."

The importance, they note, is that many "high-speed optical multi-access network applications, [such as] optical bus networks [and] WDMA optical star networks" can use burst-mode receivers. In their paper, they provide a "new, efficient burst-mode signal detection scheme" that utilizes "a two-step data clustering method based on a K-means algorithm." They go on to explain that "the burst-mode signal detection problem" can be expressed as a "binary hypothesis," determining whether a bit is 0 or 1. Further, although they could use maximum likelihood sequence estimation (MLSE) to determine the class, it "is very computationally complex, and not suitable for high-speed burst-mode data transmission." Thus, they use an approach based on k-means to solve this practical problem where simple MLSE is not enough.

9 Conclusion
This project has described the k-means clustering algorithm and its application to the problem of unsupervised learning. The k-means algorithm is
a simple iterative clustering algorithm that partitions a dataset into k clusters.
At its core, the algorithm works by iterating over two steps:
1) clustering all points in the dataset based on the distance between each
point and its closest cluster representative, and
2) re-estimating the cluster representatives.
Limitations of the k-means algorithm include the sensitivity of k-means to
initialization and determining the value of k. Despite its drawbacks, k-means
remains the most widely used partitional clustering algorithm in practice. The
algorithm is simple, easily understandable and reasonably scalable, and can be
easily modified to deal with different scenarios such as semi-supervised
learning or streaming data. Continual improvements and generalizations of the
basic algorithm have ensured its continued relevance and gradually increased
its effectiveness as well.

References

1. http://www.ideal.ece.utexas.edu/papers/km.pdf
2. http://www.science.uva.nl/research/ias/alumni/m.sc.theses/theses/NoahLaith.doc
3. http://cw.felk.cvut.cz/cmp/courses/ae4b33rpz/Labs/kmeans/index_en.html

Matlab code for the unsupervised learning task

clear; close all;
load('data.mat');                     % X = 2x140
%% part 1
model = kminovec(X, 4, 10, 1);
%% part 2
% clear
Gmodel.Mean = [-2, 1; 1, 1; 0, -1]';
Gmodel.Cov(:, :, 1) = [0.1 0; 0 0.1];
Gmodel.Cov(:, :, 2) = [0.3 0; 0 0.3];
Gmodel.Cov(:, :, 3) = [0.01 0; 0 0.5];
Gmodel.Prior = [0.4; 0.4; 0.2];
gmm = gmmsamp(Gmodel, 100);
figure(gcf); clf;
ppatterns(gmm.X, gmm.y);
axis([-3 3 -3 3]);
model = kminovec(gmm.X, 3, 10, 1, gmm);
figure(gcf); plot(model.L);
%% part 3
data = load('image_data.mat');
for i = 1:size(data.images, 3)
    % sum over the left half minus sum over the right half of the image
    pX(i) = sum(sum(data.images(:, 1:floor(end/2), i))) ...
          - sum(sum(data.images(:, (floor(end/2)+1):end, i)));
    % sum over the upper half minus sum over the lower half of the image
    pY(i) = sum(sum(data.images(1:floor(end/2), :, i))) ...
          - sum(sum(data.images((floor(end/2)+1):end, :, i)));
end

model = kminovec([pX; pY], 3, 10, 1);
show_class(data.images, model.class');
%% d
model = struct('Mean', [-2 3; 5 8], 'Cov', [1 0.5], 'Prior', [0.4 0.6]);
figure; hold on;
plot([-4:0.1:5], pdfgmm([-4:0.1:5], model), 'r');
sample = gmmsamp(model, 500);
[Y, X] = hist(sample.X, 10);
bar(X, Y / 500);
