Article info
Keywords:
Fuzzy time series
Adaptive order selection
Self-organising maps
FOREX
Prediction
Abstract
An adaptive ordered fuzzy time series is proposed that employs an adaptive order selection algorithm for
composing the rule structure and partitions the universe of discourse into unequal intervals based on a
fast self-organising strategy. The automatic order selection of the FTS, as well as the adaptive partitioning of
each interval in the universe of discourse, is shown to greatly affect forecasting accuracy. This strategy is
then applied to prediction of the FOREX market. Financial markets, such as FOREX, are generally attractive
applications of FTS due to their poorly understood models as well as their great deal of uncertainty in
terms of quote fluctuations and the behaviours of the humans in the loop. Specifically, since the FOREX
market can exhibit different behaviours at different times, the adaptive order selection is executed online
to find the best order of the FTS for the current prediction. The order selection module uses voting, statistical
analytic and emotional decision making agents. Comparison of the proposed method with earlier studies
demonstrates improved prediction accuracy at similar computation cost.
© 2010 Elsevier Ltd. All rights reserved.
1. Introduction
Forecasting time series data from a time-dependent sequence of
continuous values is important in a wide array of applications such
as monitoring the air pollution in the environment, estimating
blood pressure, predicting market trends in both stocks and foreign
exchange markets (Li & Cheng, 2007). In 1993, Song and Chissom
proposed a new concept of time series data prediction, namely
Fuzzy Time Series (FTS) which uses the notion of fuzzy sets and
approximate reasoning (Song & Chissom, 1993a, 1993b, 1994).
They studied the problem of forecasting fuzzy time series by using
the enrolment data in the University of Alabama and proposed a
forecasting model that is mainly composed of five steps: (1) partitioning the universe of discourse into equal intervals, (2) defining
fuzzy sets on the universe of discourse and fuzzifying the time series accordingly, (3) mining the fuzzy logical relationships that exist in the fuzzified time series, (4) forecasting and then (5)
defuzzifying the forecasted output. Song and Chissom showed
these steps to reduce the time complexity of FTS in comparison
with the previous studies.
Since the contribution of Song and Chissom, a number of other
studies have been presented to either reduce computational
overhead or increase forecasting accuracy. For example, to reduce the
computational overhead incurred in deriving the fuzzy relationships in
Song and Chissom's model, Sullivan and Woodall proposed a Markov-based
model (Sullivan & Woodall, 1994) using conventional matrix multiplication.

[* Corresponding author. Tel.: +31 53 489 3765; fax: +31 53 489 4590.
E-mail addresses: M.Bahrepour@utwente.nl (M. Bahrepour), Akbarzadeh@ieee.org
(Mohammad-R. Akbarzadeh-T.). 0957-4174/$ - see front matter © 2010 Elsevier
Ltd. All rights reserved. doi:10.1016/j.eswa.2010.06.087]

Also in 1994, Song and
Chissom applied a first-order time-variant strategy for forecasting
enrolment and discussed the differences between time-variant and
time-invariant models (Song & Chissom, 1994). To improve forecasting
accuracy, Chen presented an efficient forecasting procedure for prediction
of enrolments in the University of Alabama using simplified arithmetic
operations (Chen, 1996) that reduced the complex arithmetic operations to a
few essential operations. Huarng proposed heuristic models by integrating
problem-specific heuristic knowledge with Chen's model to reduce forecasting
error (Huarng, 2001). Chen, in his later works, proposed a high-order fuzzy
time series in which more than one previous step is given as input to the
FTS for prediction (Chen, 2002). His work was compared with the previous
studies that used only one previous step to provide the prediction. The
high-order FTS revealed that prediction accuracy is significantly increased
by using a higher order of inputs (more than one step back as the input of
the FTS). Yu proposed a weighted averaging operator to record occurrences of
each fuzzy relation and applied a weighting factor for the defuzzification
(Yu, 2005). Li et al. proposed deterministic automatons to deal with the
uncertainties in the defuzzifying and partitioning phases (Li & Cheng,
2007). Bahrepour et al. modified Yu's weighting model by partitioning the
universe of discourse unequally using a genetic algorithm (Bahrepour et al.,
2008). In their study, the genetic algorithm
F(t) = F(t−1) ∘ R(t, t−1), or
F(t) = F(t−2) ∘ R(t, t−2), or
. . .
F(t) = F(t−m) ∘ R(t, t−m)

Or alternatively

F(t) = (F(t−1) ∪ F(t−2) ∪ . . . ∪ F(t−m)) ∘ R(t, t−m)                 (1)

F(t) = (F(t−1), F(t−2), . . . , F(t−m)) ∘ Ra(t, t−m)                  (2)

Eq. (2) is called the mth-order model of F(t), and Ra(t, t−m) is a relation
matrix that describes the fuzzy relationship between F(t−1), F(t−2), . . . ,
F(t−m) and F(t) (Chen, 2002). In short, Eq. (2)
means that more than one input in composition with a relational
matrix can produce the predicted result.
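The composition in Eqs. (1) and (2) can be illustrated with a small sketch. The snippet below is not from the paper; it shows one max–min composition step of a fuzzified observation with a relation matrix, with all membership degrees and the matrix R being made-up illustrative values.

```python
def max_min_compose(memberships, relation):
    """Max-min composition: out_j = max_i min(a_i, R[i][j])."""
    n = len(relation[0])
    return [max(min(a, relation[i][j]) for i, a in enumerate(memberships))
            for j in range(n)]

# Fuzzified observation F(t-1) over three linguistic values A1..A3
# (illustrative membership degrees).
f_prev = [0.2, 1.0, 0.5]

# Hypothetical relation matrix R(t, t-1).
R = [[0.1, 0.7, 0.3],
     [0.9, 0.4, 0.6],
     [0.2, 0.8, 1.0]]

f_t = max_min_compose(f_prev, R)   # fuzzified prediction F(t)
```

Each output membership is the strongest path from any input label through the relation matrix, which is what "composition with a relational matrix" amounts to here.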
Based upon the above preliminaries, the proposed approach
(similar to most other approaches on fuzzy time series) is presented in Section 4.
3. An introduction to self-organising maps
SOMs consist of components called nodes which, in clustering applications,
serve as the centres of clusters (Software, 2004). In
our study, a simple self-organising map is used to bundle historical
data into clusters. These clusters are then used for partitioning the
universe of discourse unequally. The network is trained with the widely
used Kohonen algorithm, as follows:
(1) Initialise centre of each cluster, Ci (i = 1, 2, . . . , n), randomly.
(2) Grab an input vector Dj.
(3) Traverse each cluster centre:
a. Use a similarity measure to find the distance between
each cluster centre Ci (i = 1, 2, . . . , n) and the input data
vector Dj. Euclidean distance is a common measure of
similarity, as used in this paper, and is calculated as below:
d(Ci, Dj) = sqrt( Σ_{l=1}^{m} (C_il − D_jl)² )
where Cil, Djl are the lth elements of two vectors Ci and Dj, and
m is the vector dimension.
b. Find the cluster centre C* which produces the smallest
distance with the input vector.
(4) Update the neighbours of the cluster centre C*, i.e. Cv
(v = 1, 2, . . . , k), where Cv is a neighbour of C* and k is
the number of C*'s neighbours. This update is performed
by pulling Cv closer to the input data vector D(t) using
the below formula:

Cv(t + 1) = Cv(t) + H(t) · a(t) · (D(t) − Cv(t))

where t is the current iteration, H(t) is the restraint due to distance from C*, and a(t) is the learning restraint due to time.
(5) Increment t and repeat while t < T, where T is the limit on the
number of iterations.
The output of the SOM is the set of cluster centres Ci
(i = 1, 2, . . . , n). Further information on SOMs can be found in
Gurney (1997), Demuth, Beale, and Hagan (2006), Gupta, Jin, and
Homma (2003).
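The training loop above can be sketched for one-dimensional data such as currency quotes. This is a minimal illustration of steps (1)-(5), not the paper's implementation: the neighbourhood restraint H(t) and the learning restraint a(t) are simple assumed decays, and the quote values are made up.

```python
import random

def kohonen_1d(data, n_clusters, epochs=50, seed=0):
    """Minimal 1-D Kohonen training loop (steps 1-5 of Section 3)."""
    rng = random.Random(seed)
    # Step 1: initialise cluster centres randomly within the data range.
    centres = [rng.uniform(min(data), max(data)) for _ in range(n_clusters)]
    for t in range(epochs):
        a = 1.0 / (1 + t)  # learning restraint a(t), an assumed decay
        for x in data:     # step 2: grab an input vector
            # Step 3: best matching centre (1-D Euclidean distance).
            best = min(range(n_clusters), key=lambda i: abs(centres[i] - x))
            # Step 4: update the winner and its immediate neighbours;
            # H(t) here simply shrinks with index distance from the winner.
            for v in range(max(0, best - 1), min(n_clusters, best + 2)):
                h = 1.0 if v == best else 0.5
                centres[v] += h * a * (x - centres[v])
    return sorted(centres)

quotes = [102, 104, 108, 109, 113, 114, 116, 117, 120, 122]
centres = kohonen_1d(quotes, n_clusters=3)
```

The sorted centres returned here are the Ci that Step 1 of Section 4.1 consumes when cutting the universe of discourse.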
4. Proposed approach
In this study, two modifications to Chen's high-order fuzzy time series
(Chen, 2002) are proposed. The first modification is partitioning the
universe of discourse unequally by using the SOM, and the second is to
adaptively find the best order of the FTS. The SOM is important due to
its fast clustering, which can bundle data into clusters faster than a GA
(Section 4.4 addresses this time complexity). In previous studies such as
Chen's and Bahrepour's (Bahrepour et al., 2008; Chen & Chung, 2006), a GA
was used to find the best length of the intervals. To reduce computational
overhead during the partitioning of the universe of discourse, the SOM is
recommended here; and to improve prediction accuracy, the adaptive order
selection is introduced.
In the following, the proposed algorithm is presented in several
steps. An example on the USD/JPY currency-pair serves to illustrate
the approach. This approach is then applied in Section 5 to the FOREX
daily dataset.
4.1. The algorithm
Step 1. Partition the universe of discourse U into n unequal
intervals, where U = {u1, u2, . . . , un}. This partitioning is
accomplished by the following routine:
I. Find the centre of n clusters (c1, c2, . . . , cn) using SOM.
II. Let Dmin and Dmax be the minimum value and the maximum
value of the historical data (the minimum and maximum quotes
in the FOREX dataset example). Let U = [Dmin − D1, Dmax + D2] be
the universe of discourse, where D1 and D2 are two proper
positive numbers for marginal extensions (that might be
needed for unseen data). Then U is partitioned into n unequal
intervals by the below rule:
u1 = [Dmin − D1, (c1 + c2)/2],
u2 = [(c1 + c2)/2, (c2 + c3)/2],
. . . ,
un = [(c_{n−1} + cn)/2, Dmax + D2]

Table 1
Several USD/JPY quotes with their corresponding linguistic values.
In the USD/JPY currency-pair example, (Dmin − D1) = 102 and
(Dmax + D2) = 123. The universe of discourse is partitioned into
seven unequal intervals and the outputs of the SOM are c1 = 108,
c2 = 110, c3 = 114, c4 = 115, c5 = 117, c6 = 119, c7 = 121. Therefore,
u1 = [102, 109], u2 = [109, 112], u3 = [112, 113], u4 = [113, 116],
u5 = [116, 118], u6 = [118, 120], u7 = [120, 123].
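The boundary rule of Step 1 can be written compactly: sort the cluster centres and cut at the midpoints between consecutive centres, with the margins Dmin − D1 and Dmax + D2 as the outer boundaries. The snippet below is an illustrative sketch with made-up centres and margins, not the paper's data.

```python
def partition_universe(centres, d_min, d_max):
    """Cut [d_min, d_max] at midpoints between consecutive sorted centres."""
    cs = sorted(centres)
    bounds = [d_min] + [(a + b) / 2 for a, b in zip(cs, cs[1:])] + [d_max]
    # Pair consecutive boundaries into the intervals u1..un.
    return list(zip(bounds, bounds[1:]))

intervals = partition_universe([4.0, 10.0, 16.0], d_min=0.0, d_max=20.0)
# intervals -> [(0.0, 7.0), (7.0, 13.0), (13.0, 20.0)]
```

Because the centres come from the SOM, dense regions of the historical data receive narrow intervals and sparse regions receive wide ones, which is the point of the unequal partitioning.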
Step 2. Define fuzzy sets on the universe of discourse U and
fuzzify the historical data. A fuzzy set Ai of U is defined as
Ai = fAi(u1)/u1 + fAi(u2)/u2 + . . . + fAi(un)/un, where fAi(uj)
indicates the grade of membership of uj in Ai.
USD/JPY quotes    Linguistic value
104.20            A1
108.34            A1
119.25            A6
112.23            A3
122.45            A7
Table 2
The fuzzy logical relationships for the first-order model.

A1 → {A2, A2, A3}
A4 → {A5, A6}
A3 → {A1}
A6 → {A4, A5}
If the data (e.g. the quotes in the FOREX dataset example) obtains its
highest membership degree with Ak, then the fuzzified data is labelled
as Ak. For example, seven linguistic values A1, A2, . . . , A7 are
defined, one per interval; Table 1 shows several USD/JPY quotes with
their corresponding linguistic values.
Table 3
The fuzzy logical relationships for the high-order model.

Third-order
#, A1, A1 → {A2}
A1, A1, A2 → {A1, A3}
A2, A2, A3 → {A2, A3, A3}
A6, A7, A7 → #

Fourth-order
#, A1, A1, A1 → {A2, A4}
A1, A1, A2, A2 → {A1, A3, A3}
A1, A2, A2, A3 → {A2, A3, A4, A4}
A6, A6, A7, A7 → #

Fifth-order
#, A1, A1, A1, A2 → {A2, A4, A5}
A1, A1, A2, A2, A3 → {A3, A3}
A1, A2, A2, A3, A3 → {A2, A4}
A5, A6, A6, A7, A7 → #
Step 3. Derive the fuzzy logical relationships among the fuzzified
historical data. Table 2 shows the fuzzy logical relationships for the
first-order model, and Table 3 shows those for the high-order models.

Step 4. Forecast by the rules below.

First-order FTS:

Rule 1: If F(t−1) = Ai and Ai → { }, i.e. there is no match/precedence
in the historical data for Ai, then the predicted result ŷ at time t is
the midpoint of the interval ui, being the centre of the ith cluster
(ci) in which the maximum membership degree of Ai is located:

ŷ = ci

In other words, in the absence of earlier historical data with similar
conditions, the best assumption is that there is no change in the time
series. For example, suppose we have the following fuzzy relationships
and the input is A7; since A7 → { }, ŷ = c7, where c7 is the centre of
the cluster for u7 in which the maximum membership of A7 is located.

A1 → {A2, A2, A3}
A4 → {A5, A6}
A3 → {A1}
A6 → {A4, A5}

Rule 2: If F(t−1) = Ai and Ai → {Aj}, then the predicted result ŷ at
time t is the midpoint cj of the interval uj in which the maximum
membership degree of Aj is located:

ŷ = cj

For example, with the same fuzzy relationships and the input A3: since
A3 → {A1}, therefore ŷ = c1, where c1 is the centre of the cluster for
u1 in which the maximum membership of A1 is located.

Rule 3: If F(t−1) = Ai and Ai → {Aj1, Aj2, . . . , AjH} (H > 1), then
the predicted result ŷ at time t is

ŷ = (1/H) · Σ_{i=1}^{H} c_{ji}

where c_{j1}, c_{j2}, . . . , c_{jH} are the midpoints (the cluster
centres) of the intervals u_{j1}, u_{j2}, . . . , u_{jH} in which the
maximum membership degrees of Aj1, Aj2, . . . , AjH are located,
respectively. For example, if we have the fuzzy relationship
A5 → {A2, A6, A6, A6} and the input is A5, ŷ is computed as:

ŷ = (c2 + c6 + c6 + c6)/4

High-order FTS:

Rule 1: If F(t−1) = Aj1, Aj2, . . . , Ajk and Aj1, Aj2, . . . , Ajk → { },
where k is the order of the FTS and the variable j shows that the
linguistic labels are varied, then the predicted result ŷ at time t is a
weighted average of the antecedents' own centres, in which more recent
observations receive larger weights:

ŷ = ( Σ_{i=1}^{k} i · c_{ji} ) / ( Σ_{i=1}^{k} i )

where c_{j1}, c_{j2}, . . . , c_{jk} are the midpoints (the cluster
centres) of the intervals u_{j1}, u_{j2}, . . . , u_{jk} in which the
maximum membership degrees of Aj1, Aj2, . . . , Ajk are located,
respectively. For example, if the input is A3, A7 and A3, A7 → { },
then:

ŷ = (2·c7 + 1·c3)/3

Rule 2: If F(t−1) = Aj1, Aj2, . . . , Ajk and Aj1, Aj2, . . . ,
Ajk → {Aj1}, where k is the order of the FTS and the variable j shows
that the linguistic labels are varied, then the predicted result ŷ at
time t is the midpoint (the cluster centre c_{j1}) of the interval
u_{j1} in which the maximum membership degree of Aj1 is located:

ŷ = c_{j1}

For example, if we have the fuzzy relationship A1, A2 → {A1} and the
input is A1, A2, then:

ŷ = c1

Rule 3: If F(t−1) = Aj1, Aj2, . . . , Ajk and Aj1, Aj2, . . . , Ajk →
{Aj1, Aj2, . . . , AjH} (H > 1), where k is the order of the FTS and the
variable j shows that the linguistic labels are varied, then the
predicted result ŷ at time t is

ŷ = (1/H) · Σ_{i=1}^{H} c_{ji}

For example,

ŷ = (c1 + c2 + c3 + c3)/4

where c1, c2, c3 are the midpoints (cluster centres) of the intervals
u1, u2, u3 in which the maximum membership of A1, A2, A3 is located,
respectively.
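The three forecasting rules above can be sketched as one function (the high-order form; the first-order case is k = 1). This is an illustrative reading of the rules, not the paper's code: `relations` maps an antecedent tuple of labels (oldest first) to its list of consequent labels, `centre` maps a label to its cluster centre, and the centres below are the c1..c7 of the USD/JPY example.

```python
def forecast(antecedent, relations, centre):
    """Apply Rules 1-3: weighted fallback, single consequent, or mean."""
    consequents = relations.get(tuple(antecedent), [])
    k = len(antecedent)
    if not consequents:
        # Rule 1: no precedent -- weighted average of the antecedents'
        # own centres, more recent labels weighted more heavily.
        num = sum(i * centre[a] for i, a in enumerate(antecedent, start=1))
        return num / sum(range(1, k + 1))
    if len(consequents) == 1:
        # Rule 2: a single consequent -- its cluster centre.
        return centre[consequents[0]]
    # Rule 3: several consequents -- the mean of their centres.
    return sum(centre[c] for c in consequents) / len(consequents)

centre = {"A1": 108, "A2": 110, "A3": 114, "A6": 119, "A7": 121}
relations = {("A1", "A2"): ["A1"], ("A5",): ["A2", "A6", "A6", "A6"]}

r2 = forecast(["A1", "A2"], relations, centre)  # Rule 2 -> 108
r1 = forecast(["A3", "A7"], relations, centre)  # Rule 1 -> (1*114 + 2*121)/3
r3 = forecast(["A5"], relations, centre)        # Rule 3 -> (110 + 3*119)/4
```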
Step 5. Compute the quantitative measures H and V, as defined below,
from the above grouping process of causal relations.

Definition 3. Let H (number of hits) be the number of matched
patterns in the historical data for a given set of antecedent fuzzy
propositions. For example, if F(t−1) = Aj1, Aj2, . . . , Ajk and
Aj1, Aj2, . . . , Ajk → {Aj1, Aj2, . . . , Ajp}, where k is the order of
the FTS (k ≥ 1) and the variable j shows that the linguistic labels can
be varied, then H is the cardinality |{Aj1, Aj2, . . . , Ajp}| = p.
Definition 4. Let V (dispersion) indicate the dispersion among the
elements of a group. For example, consider a relationship like
F(t−1) = Aj1, Aj2, . . . , Ajk, where k is the order of the FTS (k ≥ 1)
and the variable j shows that the linguistic labels are varied, and
Aj1, Aj2, . . . , Ajk → {A1, A3, A3, A5}. Assume that the maximum
membership degrees of {A1, A3, A3, A5} are located in {c1, c3, c3, c5},
respectively. Dispersion can be numerically obtained from the following
formula:

V = (1/H) · Σ_{i=1}^{H} (ci − c̄)²

V (dispersion) is the same as variance, H is the number of hits
(Definition 3), the ci are cluster centres, and c̄ is the mean value of
the participating ci.
In Step 3, m sets of fuzzy relationships are derived (first-order,
second-order, up to mth-order, where m is chosen by the user). In
Step 4, m predicted results are obtained. In this step, a total of
m V's and m H's are computed. These m predicted results, along
with the m H's and V's, are used in the following adaptive order
selection module.
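For a single matched antecedent pattern, Definitions 3 and 4 reduce to a count and a variance over the consequents' cluster centres. A minimal sketch, with illustrative centre values:

```python
def hits_and_dispersion(consequent_centres):
    """Definition 3 (H = number of hits) and Definition 4 (V = variance)."""
    H = len(consequent_centres)
    mean = sum(consequent_centres) / H
    V = sum((c - mean) ** 2 for c in consequent_centres) / H
    return H, V

# e.g. Aj1, ..., Ajk -> {A1, A3, A3, A5}, with centres c1, c3, c3, c5:
H, V = hits_and_dispersion([1.0, 3.0, 3.0, 5.0])
```

A small V means the matched precedents all point to nearby intervals, which is why the order selection module below prefers low-dispersion groups.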
Step 6. Find the best order among the m predicted results by using
the adaptive order selection module, and present the predicted result
chosen by the module as the output. The algorithm of the adaptive order
selection module is detailed in Section 4.2. Fig. 1 shows the complete
flowchart of the proposed approach.
4.2. Adaptive order selection algorithm

The adaptive order selection module employs three agents (voting,
statistical and emotional) to find the best order. These agents work
sequentially in the following order. Fig. 2 shows the algorithm of the
adaptive order selection module.
[Figs. 1 and 2: flowcharts of the proposed approach and of the adaptive
order selection module, in which the voting agent, the statistical agent
and the emotional agent (AEM) are tried in turn.]
simply a repetition of the previous step (see Step 4 and Definition 3),
and such answers are not usually good choices. H = 1 is desirable
because it means the predicted result has already occurred in the
historical data once, and the next step is likely to be the same as what
has happened in the past. Therefore, the first stage seeks an order z
such that σ²(z) = 0 and H(z) > 0, for z = 1, 2, . . . , m. If the first
phase fails to find an appropriate order, the second phase tries to find
the predicted result with the smallest variance and the largest H; that
is, an order z is desirable where, for all i, σ²(z) < σ²(i) and
H(z) > H(i), with H(z), H(i) > 0 and i, z = 1, 2, . . . , m. The answer
with greater H and smaller σ² is the answer repeated several times in
the historical data with less dispersion, which implies that the greater
H is preferred. If these two phases both fail to find the best order,
the next agent (the emotional agent) chooses the best order.
3. Emotional Agent (AE): Decide about the order by using the emotional
agent. The two aforementioned methods use rational agents, and when
rationality cannot find a solution, emotions should be utilised, just
like the human decision making process. This technique can also be
regarded as the integration of human/expert knowledge into an expert
system; however, since this information comes from the hunches of the
FTS users, it is an emotional signal (further information is available
in Sections 4.3.3–4.3.5). Therefore, the integration of human knowledge
or hunches according to previous experience is called an emotional
signal (Bechara & Damasio, 2005). According to Bechara and Damasio, the
right combination of rationality and emotion yields advantageous
decision making (Bechara & Damasio, 2005). To make an advantageous
decision, the emotional decision making agent chooses the ith-order of
the FTS, where i is varied based upon the dataset to which the FTS is
applied. In our study, i is three for the FOREX application.
Investigations by Li and Cheng (2008) as well as our own observations
show that three (3) is an appropriate order for financial data
prediction. Therefore, this experience is formulated as an emotional
signal in the final stage of the decision making procedure.
Consequently, if the two previous agents fail to find the best order,
the hunches of the FTS users, in the role of an emotional signal, choose
the best, i.e. the ith, order (3rd order on the FOREX historical data).
For other applications, the hunches of FTS users on the best order
should be learnt and substituted for i.
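The statistical and emotional stages described above can be sketched as follows. This is an illustrative reading, not the paper's code: the voting-agent stage is omitted here, and the stage-2 tie-breaking (largest H first, then smallest variance) is an assumption, since the text asks for an order that dominates on both measures simultaneously.

```python
def select_order(H, V, default_order=3):
    """H[z-1] and V[z-1] hold the hits and dispersion of the z-th order."""
    orders = range(1, len(H) + 1)
    # Stage 1: an order with zero dispersion and at least one hit.
    exact = [z for z in orders if H[z - 1] > 0 and V[z - 1] == 0]
    if exact:
        return exact[0]
    # Stage 2 (assumed tie-break): among orders with hits, prefer the
    # largest H and, among those, the smallest variance.
    hit = [z for z in orders if H[z - 1] > 0]
    if hit:
        return min(hit, key=lambda z: (-H[z - 1], V[z - 1]))
    # Stage 3: emotional fallback (order 3 for the FOREX application).
    return default_order

best = select_order(H=[2, 3, 3], V=[0.5, 0.2, 0.9])  # -> 2
```

With the illustrative H and V above, no order has zero dispersion, so stage 2 picks order 2 (three hits, smallest variance); had all H been zero, the emotional default would have been returned.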
4.3. Introduction to the techniques used in adaptive order selection
module
To find the best order, three different agents are employed. Each agent
is one approach to the problem and tries to find the optimal solution.
To explain why these agents were chosen, the following subsections give
the necessary information about each agent and the corresponding
technique it uses.
Table 4
Time complexity comparison.

The proposed approach:
O(ProposedApproach) = O(k·n + m²·p² + m²)
where k is the number of training epochs for the SOM, p is the number of
training data, n is the number of intervals, and m is the maximum order
chosen by the user (the prediction is carried out from 1st order up to
mth-order).

Chen's high-order method (HFTS):
O(HFTS) = O(n·p + k·p² + k·p)
where n is the number of intervals, p is the total number of training
data, and k is the order of the FTS.

FTS with genetic algorithm (FTSGA):
O(FTSGA) = O(p² + G·I·q²)
where p is the total number of training data, G is the number of
generations, I is the number of individuals, and q is the number of
training data used in the genetic algorithm (usually q < p, to speed up
the process of partitioning).
Table 5
MSE error rates (average error rates along with their standard deviations).

           HFTS (Chen, 2002)          FTS-GA (Chen & Chung, 2006)  The proposed method
           Mean         SD            Mean         SD              Mean         SD
USD/EUR    1.1042e−005  7.1063e−005   8.6845e−005  4.2306e−004     5.6123e−006  1.9308e−005
USD/GBP    2.1757e−004  1.5131e−004   4.7017e−004  1.7702e−004     6.2356e−005  2.5174e−004
EUR/GBP    0.0011       9.4688e−005   2.2954e−004  3.4695e−004     2.2318e−005  8.1593e−005
4.4. Time complexity of the proposed approach

O(Clustering) = O( k · (1 + n + n·N_BMU) )

where k is the number of training epochs, n is the number of intervals,
and N_BMU is the maximum number of neighbours of the best matching unit.
The SOM and its method of clustering are described in Section 3.
The time complexity of making predictions is as follows:

O(Prediction) = O[ (p·n) + (Σ_{j=1}^{m} j)·(Σ_{i=1}^{p} i) + (p·Σ_{j=1}^{m} j) ]

where p is the number of training data used in FTS prediction and m is
the maximum order chosen by the user (the prediction is accomplished
from 1st order up to mth-order). The term (p·n) is the time complexity
of fuzzifying the historical data; (Σ_{j=1}^{m} j)·(Σ_{i=1}^{p} i) is
the time complexity of making the m sets of fuzzy relationships; and
p·Σ_{j=1}^{m} j is the time complexity of making predictions from the
derived fuzzy relationships (the time complexity of prediction is p for
the first order and 2·p for the second order, so making m predictions
from 1st order up to mth-order costs p·Σ_{j=1}^{m} j).
The time complexity of the adaptive order selection is:

O(AutomaticOrderSelection) = O[ (Σ_{i=1}^{m} i) + m + (Σ_{j=1}^{m} j) ]
Fig. 3. (a)–(c) One of the predicted signals for each currency-pair. The
proposed method has been compared with the high-order method (Chen,
2002) and FTS with genetic algorithm (Chen & Chung, 2006).
Taking the most time-consuming term of each polynomial:

O(Clustering) = O(k·n)

O(Prediction) = O( (Σ_{j=1}^{m} j)·(Σ_{i=1}^{p} i) )
              = O( (m·(m+1)/2) · (p·(p+1)/2) ) = O(m²·p²)

O(AutomaticOrderSelection) = O( Σ_{i=1}^{m} i + Σ_{j=1}^{m} j )
                           = O( 2 · m·(m+1)/2 ) = O(m²)

As a result, the total time complexity of the proposed approach is:

O(ProposedApproach) = O(k·n + m²·p² + m²)
MAPE = (1/n) · Σ_{j=1}^{n} |Real Value − Forecasted Value| / Real Value

MSE = (1/n) · Σ_{j=1}^{n} (Forecasted Value − Real Value)²
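The two error measures are direct to compute; a minimal sketch with made-up real and forecasted values:

```python
def mape(real, forecast):
    """Mean absolute percentage error."""
    n = len(real)
    return sum(abs(r - f) / r for r, f in zip(real, forecast)) / n

def mse(real, forecast):
    """Mean squared error."""
    n = len(real)
    return sum((f - r) ** 2 for r, f in zip(real, forecast)) / n

real = [100.0, 110.0, 120.0]
pred = [101.0, 108.0, 123.0]
errors = (mape(real, pred), mse(real, pred))
```

MAPE is scale-free (useful across currency pairs of different magnitudes), while MSE penalises large individual misses more heavily.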
Table 6
MAPE error rates (average error rates along with their standard deviations).

           HFTS (Chen, 2002)   FTS-GA (Chen & Chung, 2006)  The proposed method
           Mean      SD        Mean      SD                 Mean      SD
USD/EUR    0.0032    0.0083    0.0057    0.0103             0.0013    0.0035
USD/GBP    0.0119    0.0154    0.0245    0.0041             0.0037    0.0078
EUR/GBP    0.0066    0.0115    0.0145    0.0078             0.0028    0.0028
and using the adaptive order selection to find the best order at
different times.

However, this hybrid algorithm also requires more computation. While
this limitation is not severely hampering for the considered example,
since decisions are made only on a daily basis here, it can become a
limiting factor when fast decision making and adaptation are required. A
future direction of this research is to reduce execution time by using
more parallel agents. We also believe better performance can be obtained
by exploiting the uncertainty in the information and the decision making
process.
Appendix A

A.1. Time complexity of the FTS with genetic algorithm

The time complexity of the FTS with genetic algorithm (O(FTSGA))
introduced in Chen and Chung (2006) is as follows:

O(GeneticAlgorithm) = O( G · I · (Pc + Pm + O(F(y)) + 1) )

where G is the number of generations, I is the number of individuals, Pc
is the combination (crossover) probability, Pm is the mutation
probability, O(F(y)) is the time complexity of the fitness function, and
1 is the time complexity of selection, performed once for each
individual in each generation. The time complexity of O(F(y)) for the
problem of partitioning the universe of discourse is:

O(F(y)) = O[ (n·q) + (Σ_{i=1}^{q} i) + q ]

O(F(y)) = O( q·(q+1)/2 ) = O(q²)

O(GeneticAlgorithm) = O( G · I · (Pc + Pm + q² + 1) )

Again, we take the most time-consuming term in the above polynomial to
find the time complexity of the genetic algorithm in partitioning the
universe of discourse unequally:

O(GeneticAlgorithm) = O(G·I·q²)

The time complexity of O(Prediction) is:

O(Prediction) = O[ (n·p) + Σ_{i=1}^{p} i ] = O( p·(p+1)/2 ) = O(p²)

O(FTSGA) = O(p² + G·I·q²)

where p is the total number of training data, G is the number of
generations, I is the number of individuals, and q is the number of
training data used in the genetic algorithm (usually q < p, to speed up
the process of partitioning).

A.2. Time complexity of Chen's high-order method

The time complexity of Chen's high-order model (O(HFTS)), introduced in
Chen (2002), is as follows:

O(HFTS) = O[ (n·p) + (k·Σ_{i=1}^{p} i) + (k·p) ]

where (n·p) is the time complexity of fuzzifying the historical data,
k·Σ_{i=1}^{p} i is the time complexity of deriving the fuzzy
relationships, and k·p is the time complexity of finding the appropriate
fuzzy relationship and of defuzzification. Taking the most
time-consuming terms:

O(HFTS) = O(n·p + k·p² + k·p)
References

Bahrepour, M., Akbarzadeh-T., M.-R., & Yaghoobi, M. (2008). A novel fuzzy time
series. In 13th Iranian computer conference, Kish Island, Persian Gulf.
Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural
theory of economic decision. Games and Economic Behavior, 52, 336–372.
Chen, S.-M. (1996). Forecasting enrollments based on fuzzy time series. Fuzzy Sets
and Systems, 81, 311–319.
Chen, S.-M. (2002). Forecasting enrollments based on high-order fuzzy time series.
Cybernetics and Systems: An International Journal, 33, 1–16.
Chen, S.-M., & Chung, N.-Y. (2006). Forecasting enrollments of students by using
fuzzy time series and genetic algorithms. Information and Management Sciences,
17(3), 1–17.
Chevaleyre, Y. et al. (2006). A short introduction to computational social choice.
Publications of the Universiteit van Amsterdam (Netherlands).
Demuth, H., Beale, M., & Hagan, M. (2006). Neural network toolbox, for use with
MATLAB. User's guide. The MathWorks.
Gupta, M. M., Jin, L., & Homma, N. (2003). Static and dynamic neural networks, from
fundamentals to advanced theory. IEEE Press.
Gurney, K. (1997). An introduction to neural networks. CRC Press.
Huarng, K. (2001). Heuristic models of fuzzy time series for forecasting. Fuzzy Sets
and Systems, 123, 369–386.
Li, S.-T., & Cheng, Y.-C. (2007). Deterministic fuzzy time series model for forecasting
enrollments. Computers and Mathematics with Applications, 53(12), 1904–1920.
Li, S.-T., & Cheng, Y.-C. (2008). Deterministic fuzzy time series model for
forecasting enrollments. Computers and Mathematics with Applications.
Neapolitan, R. E., & Naimipour, K. (2004). Foundations of algorithms using C++
pseudocode. Jones & Bartlett Publishers.
Software, I.O. (2004). Self-organizing maps overview. Available from: <http://
www.improvedoutcomes.com/docs/WebSiteDocs/SOM/Overview_of_SelfOrganizing_Maps_SOMs_.htm>.
Song, Q., & Chissom, B. S. (1993a). Forecasting enrollments with fuzzy time series —
part I. Fuzzy Sets and Systems, 54, 1–9.
Song, Q., & Chissom, B. S. (1993b). Fuzzy time series and its models. Fuzzy Sets and
Systems, 54, 269–277.
Song, Q., & Chissom, B. S. (1994). Forecasting enrollments with fuzzy time series —
part II. Fuzzy Sets and Systems, 62, 1–8.
Sullivan, J., & Woodall, W. H. (1994). A comparison of fuzzy forecasting and Markov
modeling. Fuzzy Sets and Systems, 64, 279–293.
Yu, H.-K. (2005). Weighted fuzzy time series models for TAIEX forecasting.
Physica A, 349, 609–624.