
K-means Algorithm

Abstract

k-Means is a rather simple but well-known algorithm for grouping objects, i.e., clustering. All objects need to be represented as a set of numerical features, and the user has to specify the number of groups (referred to as k) to identify. Each object can be thought of as a feature vector in an n-dimensional space, n being the number of features used to describe the objects to cluster. The algorithm then randomly chooses k points in that vector space; these points serve as the initial centers of the clusters. Afterwards, each object is assigned to the center it is closest to; usually the distance measure is chosen by the user and determined by the learning task. Then, for each cluster, a new center is computed by averaging the feature vectors of all objects assigned to it. The process of assigning objects and recomputing centers is repeated until it converges, and the algorithm can be proven to converge after a finite number of iterations. Several tweaks concerning the distance measure, the choice of initial centers, and the computation of new centers have been explored, as well as the estimation of the number of clusters k, yet the main principle always remains the same. This project discusses the k-means clustering algorithm, its implementation, and its application to the problem of unsupervised learning.

Contents

Abstract
1. Introduction
2. The k-means algorithm
3. How the k-means clustering algorithm works
4. Task Formulation
4.1 K-means implementation
4.2 Estimation of parameters of a Gaussian mixture
4.3 Unsupervised learning
5. Limitations
6. Difficulties with k-means
7. Available software
8. Applications of the k-Means Clustering Algorithm
9. Conclusion
References

1 Introduction

k-Means is perhaps the most widely-used clustering algorithm. Given a set of objects (records), the goal of clustering or segmentation is to divide these objects into groups, or clusters, such that objects within a group tend to be more similar to one another than to objects belonging to different groups. In other words, clustering algorithms place similar points in the same cluster while placing dissimilar points in different clusters. Note that, in contrast to supervised tasks such as regression or classification, where there is a notion of a target value or class label, the objects that form the inputs to a clustering procedure do not come with an associated target. Therefore clustering is often referred to as unsupervised learning. Because there is no need for labeled data, unsupervised algorithms are suitable for many applications where labeled data is difficult to obtain. Unsupervised tasks such as clustering are also often used to explore and characterize a dataset before running a supervised learning task. Since clustering makes no use of class labels, some notion of similarity must be defined based on the attributes of the objects. The definition of similarity and the method by which points are clustered differ depending on the clustering algorithm being applied. Thus, different clustering algorithms are suited to different types of data sets and different purposes. The best clustering algorithm to use therefore depends on the application, and it is not uncommon to try several different algorithms and choose whichever is the most useful.

The k-means algorithm is a simple iterative clustering algorithm that partitions a given dataset into a user-specified number of clusters, k. The algorithm is simple to implement and run, relatively fast, easy to adapt, and common in practice. It is historically one of the most important algorithms in data mining. k-Means in its essential form has been discovered by several researchers across different disciplines, most notably Lloyd (1957, 1982), Forgy (1965), Friedman and Rubin (1967), and MacQueen (1967). A detailed history of k-means along with descriptions of several variations is given by Jain and Dubes. Gray and Neuhoff provide a nice historical background for k-means placed in the larger context of hill-climbing algorithms. In the rest of this project, we describe how k-means works and discuss its limitations, difficulties, and some applications.

2 The k-means algorithm

The k-means algorithm applies to objects that are represented by points in a d-dimensional vector space. Thus, it clusters a set of d-dimensional vectors

D = {x_i | i = 1, ..., N},

where x_i ∈ R^d denotes the ith object or data point.

As discussed in the introduction, k-means is a clustering algorithm that partitions D into k clusters of points. That is, the k-means algorithm clusters all of the data points in D such that each point x_i falls in one and only one of the k partitions. One can keep track of which point is in which cluster by assigning each point a cluster ID; points with the same cluster ID are in the same cluster, while points with different cluster IDs are in different clusters. One can denote this with a cluster membership vector m of length N, where m_i is the cluster ID of x_i. The value of k is an input to the base algorithm. Typically, the value for k is based on criteria such as prior knowledge of how many clusters actually appear in D, how many clusters are desired for the current application, or the types of clusters found by exploring/experimenting with different values of k. How k is chosen is not necessary for understanding how k-means partitions the dataset D, and we discuss how to choose k when it is not prespecified in a later section.

In k-means, each of the k clusters is represented by a single point in R^d. Let us denote this set of cluster representatives as the set

C = {c_j | j = 1, ..., k}.

These k cluster representatives are also called the cluster means or cluster centroids. In clustering algorithms, points are grouped by some notion of closeness or similarity; in k-means, the default measure of closeness is the Euclidean distance. In particular, one can readily show that k-means attempts to minimize the following non-negative cost function:

\mathrm{Cost} = \sum_{i=1}^{N} \min_{j} \lVert x_i - c_j \rVert_2^2 \qquad (1)

In other words, k-means attempts to minimize the total squared Euclidean distance between each point x_i and its closest cluster representative c_j. Equation 1 is often referred to as the k-means objective function.
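As a concrete reading of Equation 1, the following MATLAB fragment evaluates the objective for a given set of points and cluster representatives (a sketch; the names X and C and the one-point-per-row layout are assumptions made here, not notation from the text above):

% Evaluating Equation 1 for points X (N-by-d) and representatives C (k-by-d).
cost = 0;
for i = 1:size(X, 1)
    d2 = sum((C - X(i, :)).^2, 2);   % squared Euclidean distance to every c_j
    cost = cost + min(d2);           % contribution of the closest representative
end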

3 How the k-means clustering algorithm works

Here is the step-by-step k-means clustering algorithm:

Step 1. Begin with a decision on the value of k = number of clusters.

Step 2. Put any initial partition that classifies the data into k clusters. You may assign the training samples randomly, or systematically as follows:
1. Take the first k training samples as single-element clusters.
2. Assign each of the remaining (N - k) training samples to the cluster with the nearest centroid.

Step 3. Take each sample in sequence and compute its distance from the centroid of each of the clusters. If a sample is not currently in the cluster with the closest centroid, switch this sample to that cluster and update the centroids of the cluster gaining the new sample and the cluster losing it.

Step 4. Repeat step 3 until convergence is achieved, that is, until a pass through the training samples causes no new assignments.

If the number of data points is less than the number of clusters, we simply assign each data point as the centroid of its own cluster, and each centroid is given a cluster number. If the number of data points is greater than the number of clusters, then for each data point we calculate the distance to all centroids and take the minimum; the data point is said to belong to the cluster whose centroid is at minimum distance from it.

Since we are not sure about the location of the centroids, we need to adjust the centroid locations based on the currently assigned data, and then reassign all the data to the new centroids. This process is repeated until no data point moves to another cluster anymore. Mathematically, this loop can be proved to be convergent; convergence will always occur because:

1. Each switch in step 3 decreases the sum of distances from each training sample to that training sample's group centroid.
2. There are only finitely many partitions of the training examples into k clusters.
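To make these steps concrete, here is a minimal sketch of the loop in plain MATLAB (the function name simple_kmeans, the rows-as-points layout of X, and the convergence test are illustrative assumptions, not part of any particular library):

% Sketch of k-means: X is an N-by-d data matrix, k the number of clusters.
function [m, C] = simple_kmeans(X, k)
    N = size(X, 1);
    C = X(randperm(N, k), :);            % k random data points as initial centers
    m = zeros(N, 1);                     % cluster membership vector
    changed = true;
    while changed                        % repeat until no assignment changes
        old = m;
        for i = 1:N                      % assignment step: nearest center wins
            [~, m(i)] = min(sum((C - X(i, :)).^2, 2));
        end
        for j = 1:k                      % update step: recompute each centroid
            if any(m == j)
                C(j, :) = mean(X(m == j, :), 1);
            end
        end
        changed = any(m ~= old);
    end
end

Note that initialising the centers from actual data points, rather than from arbitrary random positions, is one of the common tweaks mentioned in the abstract.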

4 Task Formulation

4.1 K-means implementation

The task description is given below.

Tasks:

1. Download the test data data.mat and display them using ppatterns(). The file data.mat contains a single variable X, a 2×N matrix of 2D points.
2. Implement the k-means algorithm and, in each iteration, display the current classification of the test data. To display the classification, use the ppatterns function again.
3. In each iteration, plot the average distance between the points and their respective closest means μ_j.
4. Experiment with different numbers K of means, e.g. K = 2, 3, 4. Execute the algorithm repeatedly, initialising the mean values μ_j with random positions. Use the function rand (see the sketch after this list).
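One possible way to carry out that random initialisation, assuming X is the 2×N data matrix from task 1 (the bounding-box scaling is an illustrative choice, not something prescribed by the assignment):

K = 3;                                   % number of means, e.g. 2, 3 or 4
lo = min(X, [], 2);                      % per-dimension minimum (2x1)
hi = max(X, [], 2);                      % per-dimension maximum (2x1)
mu = lo + rand(2, K) .* (hi - lo);       % K random 2-D positions inside the data box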

4.2 Estimation of parameters of a Gaussian mixture

The data are assumed to be generated by a mixture of three Gaussians:

p(x) = \sum_{j=1}^{3} P(j) \, N(x \mid \mu_j, \Sigma_j)

where N(μ_j, Σ_j) denotes a normal distribution with mean value μ_j and covariance Σ_j, and P(j) denotes the weight of the j-th Gaussian within the mixture. The task is, for given input data x_1, x_2, ..., x_N, to estimate the mixture parameters μ_j, Σ_j, P(j).

Tasks:

1. In each iteration of the implemented k-means algorithm, re-estimate the means μ_j and covariances Σ_j using the maximum likelihood method. P(j) will be the relative number (percentage) of data points classified to the j-th cluster.
2. In each iteration, plot the total likelihood L of the estimated parameters μ_j, Σ_j, P(j) (see the sketch after this list).
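A sketch of how this re-estimation might look inside each iteration, assuming X is the 2×N data matrix, y holds the current cluster IDs 1..K, and the Statistics Toolbox function mvnpdf is available (all variable names here are illustrative):

% Maximum-likelihood re-estimation of the mixture from the hard assignments.
for j = 1:K
    idx = (y == j);                          % points currently in cluster j
    nj  = sum(idx);
    mu(:, j) = mean(X(:, idx), 2);           % ML estimate of the mean
    Xc = X(:, idx) - mu(:, j);               % centered points of cluster j
    Sigma(:, :, j) = (Xc * Xc') / nj;        % ML estimate of the covariance
    P(j) = nj / size(X, 2);                  % relative cluster size
end
L = 0;                                       % total log-likelihood of the data
for i = 1:size(X, 2)
    p = 0;
    for j = 1:K
        p = p + P(j) * mvnpdf(X(:, i)', mu(:, j)', Sigma(:, :, j));
    end
    L = L + log(p);
end

Plotting L after each iteration then shows how the likelihood of the data under the estimated mixture evolves as the clustering stabilises.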

4.3 Unsupervised learning

The input consists of images of three letters: H, L, T. It is not known which letter is shown in which image. The task is to classify the images into three classes. The images are described by the two usual measurements:

x = (sum of pixel intensities in the left half of the image) - (sum of pixel intensities in the right half of the image)

y = (sum of pixel intensities in the upper half of the image) - (sum of pixel intensities in the lower half of the image)

Tasks:

1. Download the images of letters image_data.mat and compute the measurements x and y.
2. Using the k-means method, classify the images into three classes. In each iteration, display the means μ_j, the current classification, and the likelihood L.
3. After the iteration stops, compute and display the average image of each of the three classes. To display the final classification, you can use the show_class function.

(Figures: the average images of the three resulting classes, Class 1, Class 2 and Class 3.)

5 Limitations

The greedy-descent nature of k-means on a non-convex cost implies that the convergence is only to a local optimum, and indeed the algorithm is typically quite sensitive to the initial centroid locations. In other words, initializing the set of cluster representatives C differently can lead to very different clusters, even on the same dataset D, and a poor initialization can lead to very poor clusters.

The local minima problem can be countered to some extent by running the algorithm multiple times with different initial centroids and then selecting the best result, or by doing a limited local search about the converged solution; a sketch of the multiple-restart remedy is given below. Other approaches include methods that attempt to keep k-means from converging to local minima. The literature also offers a range of different initialization methods, as well as discussions of other limitations of k-means.
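A sketch of the multiple-restart remedy, reusing the illustrative simple_kmeans function from Section 3 (the number of trials and all variable names here are assumptions):

best_cost = Inf;
for trial = 1:10                          % several random initialisations
    [m, C] = simple_kmeans(X, k);
    cost = 0;                             % k-means objective (Equation 1)
    for i = 1:size(X, 1)
        cost = cost + min(sum((C - X(i, :)).^2, 2));
    end
    if cost < best_cost                   % keep the best clustering found so far
        best_cost = cost;  best_m = m;  best_C = C;
    end
end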

As mentioned, choosing the optimal value of k may be difficult. If one has knowledge about the dataset, such as the number of partitions that naturally comprise the dataset, then that knowledge can be used to choose k. Otherwise, one must use some other criterion to choose k, thus solving the model selection problem. One naive solution is to try several different values of k and choose the clustering which minimizes the k-means objective function (Equation 1). Unfortunately, the value of the objective function is not as informative as one would hope in this case. For example, the cost of the optimal solution decreases with increasing k until it hits zero when the number of clusters equals the number of distinct data points. This makes it difficult to use the objective function to

(a) directly compare solutions with different numbers of clusters, and
(b) find the optimal value of k.

Thus, if the desired k is not known in advance, one will typically run k-means with different values of k and then use some other, more suitable criterion to select one of the results. For example, SAS uses the cubic clustering criterion, while X-means adds a complexity term (which increases with k) to the original cost function (Equation 1) and then identifies the k which minimizes this adjusted cost; a simple version of this try-several-k approach is sketched below. Alternatively, one can progressively increase the number of clusters, in conjunction with a suitable stopping criterion. Bisecting k-means achieves this by first putting all the data into a single cluster and then recursively splitting the least compact cluster into two using 2-means. The celebrated LBG algorithm used for vector quantization doubles the number of clusters until a suitable code-book size is obtained. Both of these approaches thus alleviate the need to know k beforehand.
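To make the try-several-k idea concrete, the following sketch runs the illustrative simple_kmeans for a range of k values and plots the converged objective against k, so a knee in the curve can be looked for by eye (this does not implement the cubic clustering criterion or the X-means penalty, both of which are more principled):

costs = zeros(1, 10);
for k = 1:10
    [m, C] = simple_kmeans(X, k);        % illustrative function from Section 3
    for i = 1:size(X, 1)                 % converged objective (Equation 1)
        costs(k) = costs(k) + min(sum((C - X(i, :)).^2, 2));
    end
end
plot(1:10, costs, 'o-'); xlabel('k'); ylabel('k-means objective');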

k-means suffers from several other problems that can be understood by first noting that the problem of fitting data using a mixture of k Gaussians with identical, isotropic covariance matrices σ²I, where I is the identity matrix, results in a soft version of k-means. More precisely, if the soft assignments of data points to the mixture components of such a model are instead hardened, so that each data point is solely allocated to the most likely component, then one obtains the k-means algorithm. From this connection it is evident that k-means inherently assumes that the dataset is composed of a mixture of k balls or hyperspheres of data, and that each of the k clusters corresponds to one of the mixture components. Because of this implicit assumption, k-means will falter whenever the data is not well described by a superposition of reasonably separated spherical Gaussian distributions. For example, k-means will have trouble if there are non-convex-shaped clusters in the data. This problem may be alleviated by rescaling the data to "whiten" it before clustering, or by using a different distance measure that is more appropriate for the dataset. For example, information-theoretic clustering uses the KL-divergence to measure the distance between two data points representing two discrete probability distributions. It has recently been shown that if one measures distance by selecting any member of a very large class of divergences called Bregman divergences during the assignment step and makes no other changes, the essential properties of k-means, including guaranteed convergence, linear separation boundaries, and scalability, are retained. This result makes k-means effective for a much larger class of datasets, as long as an appropriate divergence is used.

Another method of dealing with non-convex clusters is pairing k-means with another algorithm. For example, one can first cluster the data into a large number of groups using k-means. These groups are then agglomerated into larger clusters using single-link hierarchical clustering, which can detect complex shapes. This approach also makes the solution less sensitive to initialization, and since the hierarchical method provides results at multiple resolutions, one does not need to worry about choosing an exact value for k either; instead, one can simply use a large value for k when creating the initial clusters.

6 Difficulties with k-means

The algorithm is also sensitive to the presence of outliers, since the mean is not a robust statistic. A preprocessing step to remove outliers can be helpful. Post-processing the results, for example to eliminate small clusters or to merge close clusters into a large cluster, is also desirable.

Another potential issue is the problem of empty clusters. When running k-means, particularly with large values of k and/or when the data resides in a very high-dimensional space, it is possible that at some point of execution there exists a cluster representative c_j such that all points x_i in D are closer to some other cluster representative. When the points in D are assigned to their closest cluster, the j-th cluster will then have zero points assigned to it; that is, cluster j is now an empty cluster. The standard algorithm does not guard against empty clusters, but simple extensions (such as reinitializing the cluster representative of an empty cluster, or stealing some points from the largest cluster) are possible; one such guard is sketched below.
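For example, a guard of the reinitialization kind could be run after each assignment step (a sketch using the names of the earlier fragments; picking a random data point is only one of several reasonable choices):

for j = 1:k
    if ~any(m == j)                          % cluster j received no points
        C(j, :) = X(randi(size(X, 1)), :);   % reinitialize it to a random data point
    end
end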

7 Available software

Because of the k-means algorithm's simplicity, effectiveness, and historical importance, software to run the k-means algorithm is readily available in several forms. It is a standard feature in many popular data mining software packages. For example, it can be found in Weka, or in SAS under the FASTCLUS procedure. It is also commonly included as an add-on to existing software; for example, several implementations of k-means are available as parts of various toolboxes in Matlab, and k-means is available in Microsoft Excel after adding XLMiner. Finally, several stand-alone versions of k-means exist and can easily be found on the Internet. The algorithm is also straightforward to code, and the reader is encouraged to create their own implementation of k-means as an exercise.

8 Applications of the k-Means Clustering Algorithm

Briefly, optical character recognition and speech recognition are among the applications of k-means; the literature on the subject offers a more in-depth treatment of some other practical applications, such as "data detection for burst-mode optical receiver[s]" and the recognition of musical genres. Researchers describe "burst-mode data-transmission systems": a "significant feature of burst-mode data transmissions is that due to unequal distances between" sender and receivers, "signal attenuation is not the same" for all receivers. Because of this, "network applications, [such as] optical bus networks [and] WDMA optical star networks" can use burst-mode receivers.

In their paper, they provide a "new, efficient burst-mode signal detection scheme" that utilizes "a two-step data clustering method based on a K-means algorithm." Further, although they could use maximum likelihood sequence estimation (MLSE) to determine the class, it "is very computationally complex, and not suitable for high-speed burst-mode data transmission." They therefore develop an approach based on k-means to solve a practical problem where simple MLSE is not enough.

9 Conclusion

This project has tried to explain the k-means clustering algorithm and its application to the problem of unsupervised learning. The k-means algorithm is a simple iterative clustering algorithm that partitions a dataset into k clusters. At its core, the algorithm works by iterating over two steps:

1) clustering all points in the dataset based on the distance between each point and its closest cluster representative, and
2) re-estimating the cluster representatives.

Limitations of the k-means algorithm include its sensitivity to initialization and the need to determine the value of k. Despite its drawbacks, k-means remains the most widely used partitional clustering algorithm in practice. The algorithm is simple, easily understandable, and reasonably scalable, and it can easily be modified to deal with different scenarios such as semi-supervised learning or streaming data. Continual improvements and generalizations of the basic algorithm have ensured its continued relevance and have gradually increased its effectiveness as well.

References


1. http://www.ideal.ece.utexas.edu/papers/km.pdf

2. http://www.science.uva.nl/research/ias/alumni/m.sc.theses/theses/NoahLaith.doc

3. http://cw.felk.cvut.cz/cmp/courses/ae4b33rpz/Labs/kmeans/index_en.html
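Appendix: the MATLAB script below solves the tasks of Section 4. It relies on ppatterns, gmmsamp, pdfgmm and show_class from the course materials (reference 3), and on kminovec, presumably the k-means implementation written for task 4.1.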

clear; close all;
load('data.mat');                        % X = 2x140 matrix of 2-D points

%% part 1: run k-means on the test data
model = kminovec(X, 4, 10, 1);

%% part 2: k-means on data sampled from a known Gaussian mixture
Gmodel.Mean = [-2, 1; 1, 1; 0, -1]';
Gmodel.Cov(:, :, 1) = [0.1 0; 0 0.1];
Gmodel.Cov(:, :, 2) = [0.3 0; 0 0.3];
Gmodel.Cov(:, :, 3) = [0.01 0; 0 0.5];
Gmodel.Prior = [0.4; 0.4; 0.2];
gmm = gmmsamp(Gmodel, 100);
figure(gcf); clf;
ppatterns(gmm.X, gmm.y);
axis([-3 3 -3 3]);
model = kminovec(gmm.X, 3, 10, 1, gmm);
figure(gcf); plot(model.L);              % likelihood over the iterations

%% part 3: letter images H, L, T
data = load('image_data.mat');
for i = 1:size(data.images, 3)
    % sum of intensities in the left half minus the right half
    pX(i) = sum(sum(data.images(:, 1:floor(end/2), i))) ...
        - sum(sum(data.images(:, (floor(end/2)+1):end, i)));
    % sum of intensities in the upper half minus the lower half
    pY(i) = sum(sum(data.images(1:floor(end/2), :, i))) ...
        - sum(sum(data.images((floor(end/2)+1):end, :, i)));
end
model = kminovec([pX; pY], 3, 10, 1);    % classify the images into three classes
show_class(data.images, model.class');

%% part 4: sample and plot a 1-D Gaussian mixture
model = struct('Mean', [-2 3], 'Cov', [1 0.5], 'Prior', [0.4 0.6]);  % two-component 1-D mixture
figure; hold on;
plot(-4:0.1:5, pdfgmm(-4:0.1:5, model), 'r');
sample = gmmsamp(model, 500);
[Y, X] = hist(sample.X, 10);
bar(X, Y/500);
