
Classification of Campus E-Complaint Documents Using Directed Acyclic Graph Multi-Class SVM Based on Analytic Hierarchy Process

Imam Cholissodin, Maya Kurniawati, Indriati, Issa Arwani
Informatics Department, PTIIK, Brawijaya University, Malang, Indonesia
E-mail: imamcs@ub.ac.id, 105060801111007@mail.ub.ac.id, indriati.tif@ub.ac.id, issa.arwani@ub.ac.id

Abstract: E-Complaint documents provide information that can be used to measure or evaluate the services given by a campus to its students, lecturers, staff, and the public. Using text classification, the documents can be classified by their importance and urgency. This classification helps the campus improve its services and makes the follow-up of complaints faster than before. This paper discusses the Directed Acyclic Graph Support Vector Machine (DAGSVM) method based on the Analytic Hierarchy Process (AHP) for classifying E-Complaint documents into four classes based on importance and urgency. The highest accuracy obtained in this research is 82.61%, with Sequential Training SVM parameters λ = 0.5, learning rate γ = 0.01, MaxIter = 10, and ε = 0.00001, 70% training data, stemming, and the Gaussian RBF kernel without the AHP weight.

Keywords: documents classification, E-Complaint, AHP, DAGSVM.

I. INTRODUCTION

E-COMPLAINT service is a campus facility with an important function: collecting feedback such as complaints, advice, and opinions from students, lecturers, staff, and the public. It is important for obtaining customer satisfaction and generating continuous improvement on campus. The campus is required to set and fulfill educational quality standards consistently and continuously in relation to quality assurance. Customer complaints are an instrument for evaluation and early detection of system flaws or implementations that do not meet the standards. This is very important for achieving the World Class University standard [1]. E-Complaint is a good tool to measure the level of campus services. Measuring how good or bad the services are relies on the complaint documents from the E-Complaint system. From these documents, it can be determined which complaints, opinions, advice, or critiques should be followed up immediately because of their urgency and importance. Immediate follow-up will make campus services better.

Imam Cholissodin is with the Informatics Department of PTIIK of Brawijaya University, Malang, Indonesia (phone: +62 85648067872; e-mail: imamcs@ub.ac.id). Maya Kurniawati is with the Informatics Department of PTIIK of Brawijaya University, Malang, Indonesia (e-mail: 105060801111007@mail.ub.ac.id). Indriati is with the Informatics Department of PTIIK of Brawijaya University, Malang, Indonesia (e-mail: indriati.tif@ub.ac.id). Issa Arwani is with the Informatics Department of PTIIK of Brawijaya University, Malang, Indonesia (e-mail: issa.arwani@ub.ac.id).

[Fig. 1 shows the workflow: the E-Complaint user fills the complaint form at the Brawijaya University E-Complaint website; the Center of Information, Documentation, and Complaints (PIDK), Brawijaya University, tells the related work unit about the complaint; the chief of the work unit gives an answer or clarification; PIDK then continues the response to the user.]

Fig. 1. Stages of complaining using E-Complaint

Fig. 1 shows the stages of complaining using E-Complaint. First, the E-Complaint user fills in the complaint form at the E-Complaint website; then the Center of Information, Documentation, and Complaints (PIDK) of Brawijaya University tells the related work unit about the complaint. Next, the chief of the work unit responds to the complaint and passes the response to PIDK. Finally, PIDK tells the user about the response.
In fact, the Center of Information, Documentation, and Complaints and the campus cannot provide a rapid response to urgent and important complaints, because the campus does not sort the documents by importance and urgency, or does the classification manually. Therefore, an intelligent system that can classify the documents quickly by urgency and importance needs to be created. Previous research on classification by urgency and importance was done by Horvitz, et al. for classifying incoming email [2]. The incoming email was classified into two classes, high priority and low priority. High priority contains email documents that are urgent and important; low priority contains email documents that are neither urgent nor important.
Various algorithms [3] used for classification include Decision Tree Learning, Nearest Neighbor, Naive Bayes Classification, Neural Networks, and Support Vector Machines (SVM). Naive Bayes is a simple probabilistic classifier based on Bayes' Theorem. It needs only a small amount of training data to estimate the parameters required for classification. There are two kinds of classification in Naive Bayes: flat classification and hierarchical classification. In a flat classification problem, each test example is assigned a class C, where C has a flat structure (there is no relationship among the classes). This approach is often single-label, i.e. the classifier outputs only one possible class for each test example [4]. Hierarchical classification places new items into a collection with a predefined hierarchical structure; the categories are partially ordered, usually from more generic to more specific [5]. However, Naive Bayes performs worse in classification compared with SVM, which classifies very effectively. SVM is a method for classifying data into two classes, positive and negative. SVM can handle documents with a high-dimensional input space. However, SVM has complex training and categorizing algorithms, and also high time and memory consumption during the training and classifying stages [6].
Newer methods classify more than two classes (multi-class) using the SVM approach: One-against-All (OAA) and One-against-One (OAO). In OAA, for an N-class (N > 2) problem, N two-class SVM classifiers are constructed. The first SVM is trained by labeling the samples in the first class as positive examples and all the rest as negative examples. The disadvantage of this method is its training complexity: each of the N classifiers is trained using all available samples. The OAO algorithm uses N(N-1)/2 two-class classifiers. Each classifier is trained using the samples of one class as positive examples and the samples of another class as negative examples. OAO needs more time in testing [7]. An advantage of using a DAG is that some analysis of generalization can be established. Also, since there is no need to traverse the whole structure of the DAG, its testing time is less than one-against-one methods, and all regions are classifiable [8]. Because of its efficiency in the testing stage and its high accuracy in earlier research, in this paper we use DAGSVM to classify E-Complaint documents by urgency and importance. The documents are classified into four classes: the first class is important and urgent, the second is important and not urgent, the third is not important and urgent, and the fourth is not important and not urgent.
For the training stage using DAGSVM, it is important to give each data item a class label. Labeling of the actual class in this research is done by an expert. However, expert labeling raises a new problem: the expert's judgment is too subjective. This subjectivity can be reduced using the Analytic Hierarchy Process (AHP).
AHP is an intuitive method for formulating and analyzing decisions. Because of its intuitiveness and flexibility, many corporations use AHP for making major policy decisions [9]. The weight vector obtained by AHP is better than one initialized manually by experts; using AHP reduces the subjective judgment of the experts. AHP can also handle structured or unstructured hierarchical models. Previous research combined AHP and SVM for the selection of transmission and transformation station sites. The data of that research used an expert scoring method, with scores varying between 0 and 1. AHP was used to determine the weights, and then SVM was used for classification. The research showed that AHP and SVM can be combined for a better classification result [10].
II. METHOD
A. Text Preprocessing
Text preprocessing is a process that changes documents from unstructured data to structured data. This process is needed to identify the words and phrases in the documents that match a word list. There are four steps in this text preprocessing: first marking the phrases, second tokenizing, third filtering, and fourth stemming.

Marking the phrases is a process that finds phrases in the documents that match a list of phrases. A phrase is a combination of words that has one specific meaning; its words are usually separated by white space. In this process, the white space in the phrase is replaced by a dash character (-).

Tokenizing is a process that separates the text of a document into single words. A single word is called a token or term, a set of characters from the document that forms a semantic unit that can be processed [11]. The tokenizing process also eliminates non-alphabet characters; the dash character is not eliminated because it marks a phrase.

Filtering is a process that keeps related and relevant words and removes the words that are not relevant. The list of irrelevant words is called the stopword list. In this process, the document is compared with the list; if the document contains stopwords, those words are eliminated.

Stemming is a process that changes words to their stem, base, or root form. In this paper, we use the Porter Stemming algorithm: the first step removes particles, the second removes possessive pronouns, the third removes 1st-order prefixes, and the fourth removes 2nd-order prefixes if needed.
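The four steps above can be sketched as a minimal Python pipeline. The phrase and stopword lists here are illustrative stand-ins for the system's real dictionaries, and the Indonesian Porter stemming step is omitted for brevity:

```python
import re

# Hypothetical example lists; the real system uses full Indonesian
# phrase and stopword dictionaries.
PHRASES = ["air conditioner", "kartu tanda mahasiswa"]
STOPWORDS = {"yang", "di", "dan", "the", "is"}

def mark_phrases(text):
    # Step 1: replace the white space inside a known phrase with a dash
    # so the phrase survives tokenizing as a single term.
    text = text.lower()
    for phrase in PHRASES:
        text = text.replace(phrase, phrase.replace(" ", "-"))
    return text

def tokenize(text):
    # Step 2: split into single terms, dropping non-alphabet characters
    # but keeping the dash that marks a phrase.
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text)

def filter_stopwords(tokens):
    # Step 3: remove terms that appear in the stopword list.
    return [t for t in tokens if t not in STOPWORDS]

def preprocess(text):
    # Step 4 (stemming) would follow here; omitted in this sketch.
    return filter_stopwords(tokenize(mark_phrases(text)))

tokens = preprocess("The air conditioner di ruang kelas is broken")
```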
B. Analytic Hierarchy Process (AHP)
[Fig. 2 shows the hierarchical model: the Goal at Level 0; the four classes at Level 1, i.e. Important and Urgent (weight 1), Important and Not Urgent (weight 2), Not Important and Urgent (weight 3), and Not Important and Not Urgent (weight 4); and the terms term 1, ..., term n at Level 2.]

Fig. 2. Hierarchical model of the E-Complaint Documents Classification Problem (n is the number of term features)

Applying AHP in combination with SVM involves three steps:
1. Structuring the decision problem into a hierarchical model. Fig. 2 shows the hierarchical model of the E-Complaint documents classification problem.
2. Making pairwise comparisons and obtaining the judgmental matrix. In this paper, AHP is used to determine the weight of each class based on user input on a nine-point scale for three combinations of pairwise classes. Table I shows the nine-point scale of pairwise comparison in AHP [12]. The scale in Table I is used to determine which of two compared classes is more important and urgent. There are three pairwise combinations: class important and urgent compared with class important and not urgent, class important and urgent compared with class not important and urgent, and class important and urgent compared with class not important and not urgent. For example, if the user's input is that class important and urgent is 9 times class not important and not urgent, it means that class important and urgent is extremely preferred over class not important and not urgent.
3. Calculating the weight vector. The input scales determine the weight of each class via the judgmental matrix obtained in the previous step.
TABLE I
PAIRWISE COMPARISON SCALE FOR AHP PREFERENCES

Numerical Rating   Linguistic judgment
1                  X is equally preferred over Y
2                  X is equally to moderately preferred over Y
3                  X is moderately preferred over Y
4                  X is moderately to strongly preferred over Y
5                  X is strongly preferred over Y
6                  X is strongly to very strongly preferred over Y
7                  X is very strongly preferred over Y
8                  X is very strongly to extremely preferred over Y
9                  X is extremely preferred over Y
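Steps 2 and 3 can be sketched as follows. This is a minimal illustration, assuming the remaining entries of the 4x4 judgmental matrix are filled in by consistency from the three user inputs (a detail the paper does not specify), and using the row geometric-mean approximation of the principal eigenvector:

```python
import math

# Example nine-point inputs: class 1 (important and urgent)
# compared with classes 2, 3, and 4.
a12, a13, a14 = 3.0, 5.0, 7.0
first_row = [1.0, a12, a13, a14]

# Assumption: fill the rest of the judgmental matrix by consistency,
# A[j][k] = first_row[k] / first_row[j], with reciprocals implied.
n = len(first_row)
A = [[first_row[k] / first_row[j] for k in range(n)] for j in range(n)]

# Approximate the principal eigenvector by the row geometric mean,
# then normalize so the class weights sum to 1.
geo = [math.prod(row) ** (1.0 / n) for row in A]
total = sum(geo)
weights = [g / total for g in geo]
```

With these example inputs, the class "important and urgent" receives the largest weight, and the weights fall off in order of preference.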

C. Support Vector Machine (SVM)

SVM needs positive and negative training sets. These sets are needed to make the best decision in separating testing data into the positive or negative class. The separator of the two classes is called a hyperplane. So, SVM is a supervised learning classification method that determines the hyperplane by maximizing the margin between the two classes [7]. Basically, SVM finds a linear separating hyperplane with maximal margin in the higher-dimensional space [13]. If w · x1 + b = +1 is the supporting hyperplane of the +1 class and w · x2 + b = -1 is the supporting hyperplane of the -1 class, then the margin between the two classes can be calculated from the distance between those two supporting hyperplanes. Fig. 3 illustrates the optimal hyperplane that separates the two sets of data, the positive and negative classes.
[Fig. 3 shows the optimal hyperplane w · x + b = 0 between the supporting hyperplanes H1: w · x + b = +1 and H2: w · x + b = -1, with margin 2/||w||.]

Fig. 3. Illustration of a Support Vector Machine. Two sets of data are separated by the optimal hyperplane. The data closest to the hyperplane are called support vectors.

These are some reasons why SVM should work well for document classification [14]:
- High-dimensional input space.
- Few irrelevant features.
- Document vectors are sparse.
- Most text categorization problems are linearly separable.

SVM can be used to solve linear and non-linear problems. A linear problem can be solved by finding the hyperplane using the function

f(x) = w · x + b

where we have defined

w = Σ_{i=1}^{n} α_i y_i x_i

and

b = -(1/2)(w · x+ + w · x-)

x+ and x- are the data points that are support vectors of the positive and negative class with the greatest weight value (α) in each class. The decision function is sign(f(x)), where

f(x) = Σ_{i=1}^{m} α_i y_i (x · x_i) + b

n denotes the number of data and m the number of support vectors. The function determines whether data belongs to the positive or negative class:

sign(f(x)) = +1 for the positive class
sign(f(x)) = -1 for the negative class

A non-linear SVM problem can be solved by choosing a kernel function. In this paper, we use the Gaussian RBF kernel:

K(x, y) = exp(-||x - y||² / (2σ²))

The result of the Gaussian RBF kernel computation is then used in Sequential Training SVM, and f(x) can be written as

f(x) = Σ_{i=1}^{m} α_i y_i K(x, x_i) + b

D. Sequential Training SVM

Quadratic Programming (QP) is one way to find the optimal hyperplane. However, QP is quite complex, time consuming, and prone to numerical instabilities. Another training alternative is Sequential Minimal Optimization (SMO). SMO is a simple algorithm that can quickly solve the SVM QP problem without any extra matrix storage and without using numerical QP optimization steps at all. Unlike previous methods, SMO chooses to solve the smallest possible optimization problem at every step [15]. The SMO approach is used to minimize memory storage [16]. However, SMO still has complexity, because it only calculates the two nearest data points in every step and has to loop until all data points are calculated. In addition, it generates unstable training results (stochastic). Sequential Training SVM is therefore used as a simple alternative to find the optimal hyperplane. The algorithm is [17]:

1. Initialize α_i = 0 and the other parameters, for example constant λ = 0.5, learning rate γ = 0.01, C = 1, Maximum Iteration (MaxIter) = 10, and ε = 0.00001. Then compute the matrix

   D_ij = y_i y_j (K(x_i, x_j) + λ²)

2. For each pattern i = 1 to l, compute:

   E_i = Σ_{j=1}^{l} α_j D_ij
   δα_i = min{max[γ(1 - E_i), -α_i], C - α_i}
   α_i = α_i + δα_i

3. Repeat Step 2 until stopping because of MaxIter or max(|δα|) < ε.

4. After the training has converged, find the support vectors (SV): SV = {x_i | α_i > Threshold}.

E. Directed Acyclic Graph SVM (DAGSVM)

A Directed Acyclic Graph is a directed graph with no directed cycles. Introduced by Platt, et al. [18], the DAGSVM algorithm trains N(N-1)/2 classifiers in the same way as One-against-One (OAO). In the testing stage, it relies on a rooted binary directed acyclic graph to make the decision [7]. The difference from OAO is in the testing stage: DAGSVM performs only (N-1) pairwise evaluations. This is an advantage of DAGSVM; its testing stage is faster than OAO. Fig. 4 [19] is an illustration of a DAGSVM classifier for four classes.

[Fig. 4 shows a rooted binary DAG for classes 1-4: the root node evaluates 1 vs 4; each internal node eliminates one class ("not 1", "not 4", and so on) and directs to a 2 vs 4 or 1 vs 3 node, then to 3 vs 4, 2 vs 3, or 1 vs 2; the four leaves are the final class labels.]

Fig. 4. Illustration of a DAGSVM classifier for four classes.
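The Sequential Training SVM algorithm of Section D, with the Gaussian RBF kernel, can be sketched as follows; the toy data and the parameter values are illustrative only:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def sequential_train(X, y, lam=0.5, gamma=0.01, C=1.0,
                     max_iter=10, eps=1e-5):
    l = len(y)
    K = rbf_kernel(X)
    D = np.outer(y, y) * (K + lam ** 2)    # D_ij = y_i y_j (K_ij + lambda^2)
    alpha = np.zeros(l)
    for _ in range(max_iter):
        E = D @ alpha                      # E_i = sum_j alpha_j D_ij
        # delta alpha_i = min{max[gamma (1 - E_i), -alpha_i], C - alpha_i}
        delta = np.minimum(np.maximum(gamma * (1 - E), -alpha), C - alpha)
        alpha += delta
        if np.max(np.abs(delta)) < eps:    # stop on max(|delta alpha|) < eps
            break
    return alpha

# Toy two-class data (illustrative only).
X = np.array([[0.0, 0.0], [0.1, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = sequential_train(X, y)
```

After training, support vectors are the points with α_i above a threshold, and b and sign(f(x)) follow the formulas in Section C.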

The Directed Acyclic Graph contains N(N-1)/2 two-class classifiers, each corresponding to a pair of classes, distributed in an N-layer structure: one node in the top layer, called the root node; two nodes in the second layer; and, in turn, i nodes in the i-th layer. The i-th node in the i-th layer directs to the i-th and (i+1)-th nodes in the next layer. In total there are N(N-1)/2 intermediate nodes; each intermediate node is one of the N(N-1)/2 two-class classifiers and completes one binary judgment. There are N leaf nodes, which correspond to the N classes. Each node represents one decision function, and classification starts by entering the object X at the root node. The forward classification route is determined by the binary result of that node's classification function. In turn, through (N-1) judgments, the final classification of X is obtained as the output at a node of the last layer [20].
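The (N-1)-step DAG evaluation described above can be sketched as follows; the `classifiers` mapping and the distance-based stand-in classifiers are hypothetical placeholders for the trained pairwise SVMs:

```python
def dag_predict(x, classes, classifiers):
    # classifiers[(i, j)](x) returns +1 if x looks like class i and
    # -1 if it looks like class j (a trained pairwise SVM in the paper;
    # any callable with that contract works here).
    remaining = list(classes)
    # Each of the (N-1) steps eliminates one class by evaluating the
    # first-vs-last classifier of the remaining candidate list.
    while len(remaining) > 1:
        i, j = remaining[0], remaining[-1]
        if classifiers[(i, j)](x) >= 0:
            remaining.pop()       # "not j": drop the last class
        else:
            remaining.pop(0)      # "not i": drop the first class
    return remaining[0]

# Illustrative stand-in classifiers for four classes on a 1-D input:
# class k "claims" x when x is closer to its center.
centers = {1: 0.0, 2: 1.0, 3: 2.0, 4: 3.0}
clfs = {(i, j): (lambda x, i=i, j=j:
                 1 if abs(x - centers[i]) < abs(x - centers[j]) else -1)
        for i in centers for j in centers if i < j}

label = dag_predict(2.1, [1, 2, 3, 4], clfs)
```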
F. Flowchart of E-Complaint Documents Classification System

The E-Complaint documents classification system can be illustrated by the flowchart shown in Fig. 5. Fig. 5 shows that after the E-Complaint documents are input, the system performs text preprocessing to change the unstructured document data into structured data. The result of text preprocessing is counted to obtain the feature values. The feature values obtained have to be normalized using the Min-Max Normalization method.
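The Min-Max Normalization step can be sketched as follows (the term-count values are illustrative):

```python
def min_max_normalize(features):
    # Scale each feature column to [0, 1] via (v - min) / (max - min);
    # a constant column maps to 0.
    cols = list(zip(*features))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h != l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in features]

# Example: term-count features for three documents.
counts = [[0, 2, 5], [4, 2, 10], [2, 6, 0]]
normalized = min_max_normalize(counts)
```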
[Fig. 5 flowchart: Start, Input E-Complaint documents, Text Preprocessing, Feature Normalization, AHP Weighting, DAGSVM, Output Classification Result, Finish.]

Fig. 5. Flowchart of E-Complaint Documents Classification System

After the data are normalized, the AHP weighting process is computed to determine the weight of each class. DAGSVM is a large process consisting of two smaller processes: training and testing. The training method used in this paper is Sequential Training SVM. The testing phase is done using the directed acyclic graph decision based on the sign(f(x)) value.

III. RESULT

The total dataset used in this paper is 153 documents in the Indonesian language. The data are divided into two kinds, training data and testing data. The number of each kind is based on the ratio of training to testing data. The totals for each ratio are shown in Table II.

TABLE II
TOTAL OF TRAINING DATA AND TESTING DATA IN EVERY RATIO

Ratio        Total of Training Data   Total of Testing Data
80% : 20%    122                      31
70% : 30%    107                      46
60% : 40%    91                       62
50% : 50%    76                       77
40% : 60%    61                       92

Testing is done in four scenarios:
1. Testing of stemming and without stemming, without using the AHP weight.
2. Testing of stemming and without stemming, using the AHP weight.
3. Testing of constant λ, learning rate γ, maximum iteration, and ε value in Sequential Training SVM.
4. Testing of the AHP weight based on the value of the pairwise comparison scale.

A. Testing of Stemming and Without Stemming Without Using AHP Weight

The result of this testing is shown in Fig. 6. Fig. 6 shows that with stemming, the average accuracy is higher than without stemming. This is because more terms shared between the training and testing data match on stemmed words than on unstemmed words.

Fig. 6. Average Accuracy from Testing of Stemming and Without Stemming Using the Gaussian RBF Kernel Without Using AHP Weight (λ = 0.5, γ = 0.01, C = 1, MaxIter = 10, ε = 0.00001)

B. Testing of Stemming and Without Stemming Using AHP Weight

The result of this testing is shown in Fig. 7. Fig. 7 shows that with stemming, the average accuracy is higher than without stemming. This is because more terms shared between the training and testing data match on stemmed words than on unstemmed words. The input scale is 3 for class important and urgent compared with class important and not urgent, 5 for class important and urgent compared with class not important and urgent, and 7 for class important and urgent compared with class not important and not urgent.

Fig. 7. Average Accuracy from Testing of Stemming and Without Stemming Using the Gaussian RBF Kernel Using AHP Weight (λ = 0.5, γ = 0.01, C = 1, MaxIter = 10, ε = 0.00001)

C. Testing of Constant λ, Learning Rate γ, Maximum Iteration, and ε Value in Sequential Training SVM

This testing uses the dataset at the 70% : 30% ratio, which gave the best accuracy. The best accuracy obtained from the previous test is 82.61%, with Sequential Training SVM parameters λ = 0.5, γ = 0.01, MaxIter = 10, and ε = 0.00001.

Fig. 8 shows the comparison between the accuracy of testing λ values without the AHP weight and with the AHP weight. The result shows that the accuracy with AHP weighting is lower than without AHP weighting.

Fig. 8. Accuracy Result of Testing λ Value in Sequential Training SVM Without Using AHP Weight and Using AHP Weight (γ = 0.01, C = 1, MaxIter = 10, ε = 0.00001)

Fig. 9 shows the comparison between the accuracy of testing γ values without the AHP weight and with the AHP weight. The result shows that the accuracy with AHP weighting is lower than without AHP weighting.

Fig. 9. Accuracy Result of Testing γ Value in Sequential Training SVM Without Using AHP Weight and Using AHP Weight (λ = 0.5, C = 1, MaxIter = 10, ε = 0.00001)

Fig. 10 shows the comparison between the accuracy of testing maximum iteration values without the AHP weight and with the AHP weight. The result shows that the accuracy with AHP weighting is lower than without AHP weighting.

Fig. 10. Accuracy Result of Testing Maximum Iteration Value in Sequential Training SVM Without Using AHP Weight and Using AHP Weight (λ = 0.5, γ = 0.01, C = 1, ε = 0.00001)

Fig. 11 shows the comparison between the accuracy of testing ε values without the AHP weight and with the AHP weight. The result shows that the accuracy with AHP weighting is lower than without AHP weighting.

Fig. 11. Accuracy Result of Testing ε Value in Sequential Training SVM Without Using AHP Weight and Using AHP Weight (λ = 0.5, γ = 0.01, C = 1, MaxIter = 10)

D. Testing of AHP Weight Based on the Value of Pairwise Comparison Scale

Fig. 12 shows the accuracy result of testing the weight. The testing is done by modifying the value of the pairwise comparison scale. The result shows that the best accuracy occurs when the input pairwise scale is 7 for class important and urgent compared with class important and not urgent, 5 for class important and urgent compared with class not important and urgent, and 3 for class important and urgent compared with class not important and not urgent. The highest accuracy from this testing is 60.87%.

Fig. 12. Accuracy Result of Testing AHP Weight (λ = 0.5, γ = 0.01, C = 1, MaxIter = 10, ε = 0.00001)

IV. CONCLUSION

The testing results show that the AHP weighting process based on the weight of each class can reduce the accuracy. From Fig. 2 it can be seen that the model is divided into three levels: Level 0, Level 1, and Level 2. This research uses the hierarchical model to the depth of Level 1, i.e. the weight of each class. Due to the reduction in accuracy, for future research we suggest using the hierarchical model shown in Fig. 2 to the depth of Level 2, i.e. considering the weight of each feature or term. AHP can handle any number of levels in the hierarchical model and can handle structured and unstructured hierarchical tree models. The stemming process is important; by using it, the system gets better accuracy. The parameters λ, γ, MaxIter, and ε of Sequential Training SVM are very important because they affect the accuracy; the right combination of parameters will increase it. The highest accuracy obtained in this research is 82.61%, with Sequential Training SVM parameters λ = 0.5, γ = 0.01, MaxIter = 10, and ε = 0.00001, 70% training data, stemming, and the Gaussian RBF kernel without the AHP weight.

The dataset used in this research is small because the Center of Information, Documentation, and Complaints (PIDK), Brawijaya University, is concerned about privacy issues, so not all of their data could be used for the purposes of this research. For future research, we will increase the size of the dataset. This research would also give better results with more than one expert, which would prevent subjectivity in the actual class determination. Another way to prevent subjectivity is to let the users of E-Complaint assign the actual class of their complaint.

ACKNOWLEDGMENT

We would like to express our very great appreciation to Ms. Prima Vidya Asteria, M.Pd for her advice and assistance in the classification of the E-Complaint documents and term selection. We would also like to thank the Center of Information, Documentation, and Complaints (PIDK), Brawijaya University, for enabling us to obtain complaint documents.

REFERENCES

[1] E-Complaint Brawijaya University. (2014, Jan 7). E-Complaint UB | Home. [Online]. Available: http://e-complaint.ub.ac.id/.
[2] E. Horvitz, A. Jacobs, and D. Hovel, "Attention-Sensitive Alerting," Proceedings of UAI 99, Conference on Uncertainty and Artificial Intelligence, Stockholm, Sweden, pp. 305-313, 1999.
[3] S. Joshi and B. Nigam, "Categorizing The Document Using Multi Class Classification in Data Mining," International Conference on Computational Intelligence and Communication Systems, IEEE, pp. 251-255, 2011.
[4] C. N. Silla Jr and A. A. Freitas, "A Global-Model Naive Bayes Approach to the Hierarchical Prediction of Protein Functions," Ninth IEEE International Conference on Data Mining, 2009.
[5] D. Ghazi, D. Inkpen, and S. Szpakowicz, "Hierarchical versus Flat Classification of Emotions in Text," Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pp. 140-146, 2010.
[6] A. Khan, B. Baharudin, L. H. Lee, and K. Khan, "A Review of Machine Learning Algorithms for Text-Documents Classification," Journal of Advances in Information Technology, Vol. 1, No. 1, pp. 4-20, 2010.
[7] G. Madzarov, D. Gjorgjevikj, and I. Chorbev, "A Multi-class SVM Classifier Utilizing Binary Decision Tree," Informatica 33, pp. 233-241, 2009.
[8] C. W. Hsu and C. J. Lin, "A Comparison of Methods for Multiclass Support Vector Machines," IEEE Transactions on Neural Networks, Vol. 13, No. 2, pp. 415-425, 2002.
[9] M. R. Kim and D. S. Cho, "The Design of The Data Preprocessing using AHP in Automatic Meter Reading System," IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 1, No. 1, pp. 130-134, 2013.
[10] Y. Yang, Q. Du, and J. Zhao, "The Application of Sites Selection Based on AHP-SVM in 500KV Substation," Logistics Systems and Intelligent Management, 2010 International Conference on, Vol. 2, IEEE, pp. 1225-1229, 2010.
[11] I. H. Witten, "Text Mining," Practical Handbook of Internet Computing, 2005.
[12] J. Yuan, "Worker Evaluation Using FCE and AHP," 2013 Fifth International Conference on Intelligent Human-Machine Systems and Cybernetics, pp. 568-571, 2013.
[13] C. W. Hsu, C. C. Chang, and C. J. Lin, "A Practical Guide to Support Vector Classification," 2010.
[14] T. Joachims, "Text Categorization with Support Vector Machines: Learning with Many Relevant Features," Springer Berlin Heidelberg, pp. 137-142, 1998.
[15] J. C. Platt, "A Fast Algorithm for Training Support Vector Machines," Technical Report MSR-TR-98-14, 1998.
[16] A. Urmaliya and J. Singhai, "Sequential Minimal Optimization for Support Vector Machine with Feature Selection in Breast Cancer Diagnosis," Proceedings of the 2013 IEEE Second International Conference on Image Information Processing, 2013.
[17] S. Vijayakumar and S. Wu, "Sequential Support Vector Classifiers and Regression," Proceedings International Conference on Soft Computing (SOCO 99), Genoa, Italy, pp. 610-619, 1999.
[18] J. C. Platt, N. Cristianini, and J. Shawe-Taylor, "Large Margin DAGs for Multiclass Classification," MIT Press, pp. 547-553, 2000.
[19] Y. Kaminka and E. Granot. (2014, Feb 8). Multiclass SVM and Applications in Object Classification. [Online]. Available: http://www.wisdom.weizmann.ac.il/~bagon/CVspring07/prs/multiclassSVM5.pps.
[20] W. Xing-wei, "The Fault Diagnosis of Blower Ventilator Based-on Multi-class Support Vector Machines," International Conference on Future Electrical Power and Energy Systems, pp. 1193-1200, 2012.
