Adaptive Resonance Theory (ART): these notes cover a recap of learning algorithms (back-prop and competitive learning), the ART recognition field and cluster structure, vector quantization as a non-neural approach to clustering, iterative clustering, and the unsupervised ART clustering algorithms.
www.csetube.in
Contents (4 hours)                                              Slides
1. Why ART ?
   Recap of learning algorithms : back-prop and competitive
   learning ; the stability-plasticity dilemma.                 03-09
2. ART Networks
   Simple ART networks ; vigilance parameter ; reset module.    10-16
3. Iterative Clustering
   Non-neural approach ; distance functions ; vector
   quantization.                                                17-24
4. Unsupervised ART Clustering
   ART1 clustering procedure ; numerical example ; ART2.        25-59
5. References                                                   60
1. Why ART ?
ART networks carry out pattern matching between an input vector and stored category prototype vectors. When the category prototype vector matches the input vector closely enough, resonance occurs, which permits learning. The network learns only in its resonant state. ART systems are therefore well suited to tasks where the input space changes, since new patterns can be learned without destroying the stored ones.
SC - ART description
This phenomenon, the contradiction between plasticity and stability, is known as the stability-plasticity dilemma; ART networks were designed to resolve it.
SC - Recap learning algorithms
- Supervised learning : the inputs and outputs of the system are provided, and the ANN is used to model the relationship between the two. Given an input set x, and a corresponding output set y,
      y = f(x) + e
  where e is the approximation error. This is useful when we want the network to reproduce the input-output relationship.
- Given a cost function g(x, y) of the input and output sets, the goal is to minimize the cost. At each training step, an input-output pair (x, y) is presented to the network, and the network produces a result. This result is put into the cost function, and the total cost is used to update the weights. This is useful when the cost function is known, but a data set is not known that minimizes that cost function over a particular input space.
- In backprop network learning, a set of input-output pairs is given and the network learns to model the relationship between them. Backprop networks are not well suited for tasks where the input space changes.
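The cost-minimization loop described above can be sketched as a one-weight least-squares fit. This is an illustrative sketch only; the data, model y = w*x, and learning rate are our assumptions, not from the notes.

```python
# Minimal sketch of supervised learning as cost minimization (illustrative
# data and learning rate). Model: y ~ w * x; cost = squared error e^2;
# the weight is updated by a gradient step at each presentation.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]          # roughly y = 2x plus noise e
w, eta = 0.0, 0.02                 # initial weight, learning rate

for _ in range(200):               # repeated presentation of the pairs
    for x, y in zip(xs, ys):
        e = y - w * x              # error between target and network output
        w += eta * e * x           # gradient step on the squared error

print(round(w, 2))                 # close to the least-squares slope, about 2
```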
SC ART-Competitive learning
In competitive learning, the neurons (or units) compete to fire for particular inputs and then learn to respond better for the inputs that activated them, by adjusting their weights. An example of a competitive learning network is shown below.

[Figure: a two-layer competitive learning network; input units i (x1, x2, x3) are connected to output units j, with weight Wij on the connection from input unit i to output unit j, e.g. W11, . . , W35.]

All input units i are connected to all output units j with weights Wij. The output units come to represent clusters of the inputs.
SC ART-Competitive learning
[Continued from previous slide]
The activation of output unit j is
      a j = Sum i=1 to n ( xi wij ) = X . Wj = WjT X
and then the output unit with the highest activation is selected for further processing; this is what "competitive" implies. Assuming that output unit k has the maximal activation, its weights are updated according to
      wk (t+1) = ( wk (t) + eta ( x(t) - wk (t) ) ) / || wk (t) + eta ( x(t) - wk (t) ) ||
Only the weights at the winner output unit k are updated; all the other weights remain unchanged. If instead the Euclidean distance is used as the measure, the activation of output unit j is
      a j = Sum i=1 to n ( xi - wij )2
and the weights of the output unit with the smallest activation are updated according to
      wk (t+1) = wk (t) + eta ( x(t) - wk (t) )
A competitive network, on the input patterns, performs an on-line clustering process; when complete, the input data are divided into disjoint clusters such that the similarity between individuals in the same cluster is larger than between those in different clusters. Stated above are two metrics of similarity: one is the inner product and the other the Euclidean distance. Other metrics of similarity can be used when
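The Euclidean-distance form of the competitive rule can be traced in a few lines. This is a sketch; the data, unit count, and learning rate are illustrative assumptions, not from the notes.

```python
# Competitive learning with the Euclidean-distance activation (sketch;
# data, learning rate and unit count are illustrative).
# Two output units, 2-D weight vectors started at arbitrary positions.
w = [[0.0, 0.0], [5.0, 5.0]]
eta = 0.5
# inputs drawn near two centers (1,1) and (7,7)
data = [(1, 1), (1.2, 0.8), (0.9, 1.1), (7, 7), (6.8, 7.2), (7.1, 6.9)] * 20

for x in data:
    # activation a_j = sum_i (x_i - w_ij)^2 ; winner = smallest activation
    a = [sum((xi - wi) ** 2 for xi, wi in zip(x, wj)) for wj in w]
    k = a.index(min(a))
    # update only the winner: w_k(t+1) = w_k(t) + eta * (x - w_k(t))
    w[k] = [wi + eta * (xi - wi) for wi, xi in zip(w[k], x)]

print([[round(v, 1) for v in wj] for wj in w])   # [[1.0, 1.0], [7.0, 7.0]]
```

Each unit's weight vector settles near the center of the cluster of inputs it wins, which is the on-line clustering behaviour described above.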
SC ART-Competitive learning
[Continued from previous slide]
deemed necessary.

Competitive learning does not guarantee stability in forming clusters.
- If the learning rate eta is slowly decreased to make the clusters stable, then the network loses its plasticity, i.e., the ability to adapt to new data.
- If the learning rate eta stays large, then the learning may become unstable: the weight vectors keep moving and previously learned patterns may be washed out.
Grossberg referred to such an occurrence as the stability-plasticity dilemma.
SC ART-Stability-Plasticity dilemma
Every learning system faces the plasticity-stability dilemma:
- How can we continue to quickly learn new things about the patterns, and yet not forget previously learned patterns?
- How does the system know to switch between its plastic and stable modes?
- What is the method by which the system can retain previously learned information?

The Adaptive Resonance Theory (ART) has been developed to resolve this dilemma: an ART network can preserve its previously learned knowledge while keeping its ability to learn new patterns. ART is a family of different neural architectures.
SC - ART networks
2. ART Networks

Adaptive Resonance Theory (ART) networks are self-organizing competitive networks. The unsupervised ARTs are similar to many iterative clustering algorithms, where the terms "nearest" and "closer" are modified slightly by introducing the concept of "resonance". The supervised ARTs are named with the suffix "MAP", as ARTMAP; here the algorithms cluster both the inputs and targets, and associate the two sets of clusters.

The basic ART structure consists of a comparison field (F1 layer), a recognition field (F2 layer), a vigilance parameter, and a reset module.

[Figure: basic ART structure - the normalized input enters the comparison field (F1 layer); F1 feeds the recognition field (F2 layer); the reset module, governed by the vigilance parameter, can reset F2.]
SC - ART networks
Comparison field
The comparison field takes an input vector (a one-dimensional array of values) and transfers it to its best match in the recognition field; the best match is the single neuron whose set of weights most closely matches the input vector.

Recognition field
Each recognition field neuron outputs a negative signal, proportional to that neuron's quality of match to the input vector, to each of the other recognition field neurons, and inhibits their output accordingly. In this way the recognition field exhibits lateral inhibition, allowing each neuron in it to represent a category to which input vectors are classified.

Vigilance parameter
After the input vector is classified, the reset module compares the strength of the recognition match to the vigilance parameter.
- Higher vigilance produces highly detailed memories (many, fine-grained categories).
- Lower vigilance results in more general memories (fewer, more general categories).
SC - ART networks
Reset Module
The reset module compares the strength of the recognition match to the vigilance parameter.
- If the vigilance threshold is met, then training commences.
- Otherwise, if the match level does not meet the vigilance parameter, then the firing recognition neuron is inhibited until a new input vector is applied; training commences only upon completion of a search procedure. In the search procedure, recognition neurons are disabled one by one by the reset function until the vigilance parameter is satisfied by a recognition match.
- If no committed recognition neuron's match meets the vigilance parameter, then an uncommitted neuron is committed, and its weights are adjusted towards matching the input vector.
SC - ART networks
ART networks can be described as vector-classifier algorithms. The simplest ART network is a vector classifier. It accepts as input a vector and classifies it into a category depending on the stored pattern it most closely resembles. If the input does not match any stored pattern within a certain tolerance, then a new category is created by storing a new pattern similar to the input. Consequently, no stored pattern is ever modified unless it matches the input vector within a certain tolerance, i.e., unless the two are sufficiently similar.
SC - ART networks
[Figure: general ART architecture - the normalized input enters the comparison field (F1 layer, STM); an adaptive filter path (LTM) connects F1 to the recognition field (F2 layer, STM), where a new cluster can be created; an expectation path (LTM) returns from F2 to F1; the reset module, governed by the vigilance parameter, implements the reset mechanism on F2.]

There are two sets of connections, each with their own weights, called:
- Bottom-up weights, from each unit of layer F1 to all units of layer F2.
- Top-down weights, from each unit of layer F2 to all units of layer F1.
SC - ART networks
The unsupervised ARTs (ART1, ART2, . . ) are the basic ART networks. Supervised ARTs are named with the suffix "MAP", as ARTMAP; they cluster both the inputs and targets, and associate the two sets of clusters. Fuzzy ART and Fuzzy ARTMAP are generalizations using fuzzy logic.

Taxonomy of ART networks (ART Networks - Grossberg, 1976):
- Unsupervised ART learning :
  ART1, ART2          - Carpenter & Grossberg, 1987
  Fuzzy ART           - Carpenter, Grossberg et al., 1987
  Simplified ART      - Baraldi & Alpaydin, 1998
- Supervised ART learning :
  ARTMAP              - Carpenter, Grossberg et al., 1987
  Fuzzy ARTMAP        - Carpenter, Grossberg et al., 1987
  Gaussian ARTMAP     - Williamson, 1992
  Simplified ARTMAP   - Baraldi & Alpaydin, 1998
SC - Iterative clustering
3. Iterative Clustering
Organizing data into sensible groupings is one of the most fundamental modes of understanding and learning. Clustering is a way to form 'natural groupings' or clusters of patterns. Cluster analysis, or clustering, is the unsupervised classification of objects according to measured or perceived intrinsic characteristics or similarity. Cluster analysis does not use category labels that tag objects with prior identifiers, i.e., class labels. The absence of category information distinguishes clustering (unsupervised learning) from discrimination (supervised learning).
SC - Iterative clustering
Example :
Three natural groups of data points, that is, three natural clusters.

[Figure: scatter plot in the X-Y plane showing three separate groups of data points.]

In clustering, the task is to learn the classification from the data alone: the data are examined for natural cluster formation among them, without pre-assigned class labels, in contrast to the decision-theoretic (supervised) approaches.
SC - Recap distance functions
The n-dimensional vector space over the real numbers is denoted Rn. An element of Rn is X = (x1, x2, . . xi . . , xn), where each xi is a real number; similarly, the other element is Y = (y1, y2, . . yi . . , yn). The vector space operations on Rn are defined component by component.
- The inner product of X and Y, defined on Rn, is given by
      X . Y = Sum i=1 to n ( xi yi )
  and is a real number.
- The angle between X and Y is given by
      theta = cos -1 ( X . Y / ( ||X|| ||Y|| ) )
  where the norm ||X|| = ( Sum i=1 to n xi 2 ) 1/2.
- The Euclidean distance between X and Y is given by
      d(X, Y) = ( Sum i=1 to n ( xi - yi )2 ) 1/2
SC Recap distance functions
Euclidean Distance
It is also known as the Euclidean metric: the "ordinary" straight-line distance between two points. For two points P = (p1, p2, . . pi . . , pn) and Q = (q1, q2, . . qi . . , qn), the Euclidean distance is
      d(P, Q) = ( Sum i=1 to n ( pi - qi )2 ) 1/2
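The two measures above are direct to write down; this is a small illustrative sketch (the function names are our own):

```python
import math

def inner_product(x, y):
    # X . Y = sum_i x_i * y_i
    return sum(xi * yi for xi, yi in zip(x, y))

def euclidean(p, q):
    # d(P, Q) = ( sum_i (p_i - q_i)^2 )^(1/2)
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

print(inner_product((1, 2, 3), (4, 5, 6)))   # 32
print(euclidean((2, 3), (2.5, 3)))           # 0.5
```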
SC - Vector quantization
Vector Quantization (VQ) is a non-neural approach to dynamic clustering. The input patterns Xp, p = 1, 2, . . , are presented one at a time, and the cluster centers Cj, j = 1, . . , M, are created and updated as the patterns arrive.
- The distance between an input pattern Xp and a cluster center Cj is
      | Xp - Cj | = d = ( Sum i=1 to n ( Xi - Cji )2 ) 1/2
- The nearest cluster center Ck is the one for which
      | Xp - Ck | < | Xp - Cj | ,  j = 1, . . , M ,  j /= k
  i.e., | Xp - Ck | is minimum.
- 1. If | Xp - Ck | < threshold distance, then pattern Xp is assigned to cluster k, and the center is recomputed as the mean of the cluster's members,
      Ck = ( 1 / Nk ) Sum of the Nk patterns assigned to cluster k.
- 2. If | Xp - Ck | > threshold distance, then a new cluster is created, with center equal to Xp.
SC - Vector quantization
Example 1
Twelve input patterns are presented one at a time. For each pattern the distance to every existing cluster center is computed, and the pattern is assigned accordingly: to the nearest cluster if the distance is below the threshold, otherwise to a new cluster. (In this example, distances up to 2.06 were accepted and distances of 3.04 and above started a new cluster.)

Input       Distance   Distance   Distance   Cluster     Updated cluster
pattern     to C1      to C2      to C3      assigned    center
1, (2,3)    -          -          -          1 (new)     C1 = (2, 3)
2, (3,3)    1.0        -          -          1           C1 = (2.5, 3)
3, (2,6)    3.041381   -          -          2 (new)     C2 = (2, 6)
4, (3,6)    3.041381   1.0        -          2           C2 = (2.5, 6)
5, (6,3)    3.5        4.609772   -          3 (new)     C3 = (6, 3)
6, (7,3)    4.5        5.408326   1.0        3           C3 = (6.5, 3)
7, (6,4)    3.640054   4.031128   1.118033   3           C3 = (6.333333, 3.333333)
8, (7,4)    4.609772   4.924428   0.942809   3           C3 = (6.5, 3.5)
9, (2,4)    1.118033   2.061552   4.527692   1           C1 = (2.333333, 3.333333)
10, (3,4)   0.942808   2.061552   3.535533   1           C1 = (2.5, 3.5)
11, (2,7)   3.535533   1.118033   5.700877   2           C2 = (2.333333, 6.333333)
12, (3,7)   3.535533   0.942808   4.949747   2           C2 = (2.5, 6.5)
SC - Vector quantization
[Continued from previous slide]
[Figure: the three clusters formed, plotted in the X-Y plane: C1 at the lower left, C2 at the upper left, C3 at the lower right.]

Clusters formed :
- No of clusters : 3
- Cluster centers : C1 = (2.5, 3.5) ;  C2 = (2.5, 6.5) ;  C3 = (6.5, 3.5).
- Cluster membership (input pattern numbers) : C1 = { 1, 2, 9, 10 } ;  C2 = { 3, 4, 11, 12 } ;  C3 = { 5, 6, 7, 8 }.

See next slide, clusters for threshold distances as 3.5 and 4.5.
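As a check on the table, the VQ procedure can be coded in a few lines. This is a sketch; the function name and the threshold value 3.0 are our assumptions (any threshold between about 2.1 and 3.0 reproduces the slide's result on these patterns).

```python
# Sketch of the vector quantization (VQ) clustering procedure applied to
# the 12 patterns of Example 1 (threshold value 3.0 is an assumption).
import math

def vq_cluster(patterns, threshold):
    centers, members = [], []              # cluster centers and member lists
    for p in patterns:
        dists = [math.dist(p, c) for c in centers]
        if dists and min(dists) < threshold:
            k = dists.index(min(dists))    # nearest existing cluster
            members[k].append(p)
            n = len(members[k])            # recompute center as the mean
            centers[k] = tuple(sum(v) / n for v in zip(*members[k]))
        else:                              # too far from every center:
            centers.append(p)              # start a new cluster
            members.append([p])
    return centers, members

patterns = [(2,3), (3,3), (2,6), (3,6), (6,3), (7,3), (6,4), (7,4),
            (2,4), (3,4), (2,7), (3,7)]
centers, members = vq_cluster(patterns, 3.0)
print(centers)   # [(2.5, 3.5), (2.5, 6.5), (6.5, 3.5)]
```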
SC - Vector quantization
Example 2
The input patterns are the same as those of Example 1. Determine the clusters, assuming the threshold distance = 3.5 and 4.5. To solve, follow the same procedure as in Example 1, and do the computations to form clusters for each assumed threshold distance.

[Figure: clusters plotted in the X-Y plane for the two threshold distances.]
- Fig (b) : for the threshold distance = 3.5, two clusters are formed.
- Fig (c) : for the threshold distance = 4.5, one cluster is formed.
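Re-running the same VQ procedure with the two threshold distances of Example 2 reproduces the stated cluster counts. The snippet is self-contained (same 12 patterns; the function is our own sketch).

```python
# VQ procedure of Example 1 re-run with the Example 2 thresholds.
import math

def vq_cluster(patterns, threshold):
    centers, members = [], []
    for p in patterns:
        dists = [math.dist(p, c) for c in centers]
        if dists and min(dists) < threshold:
            k = dists.index(min(dists))
            members[k].append(p)
            n = len(members[k])
            centers[k] = tuple(sum(v) / n for v in zip(*members[k]))
        else:
            centers.append(p)
            members.append([p])
    return centers

patterns = [(2,3), (3,3), (2,6), (3,6), (6,3), (7,3), (6,4), (7,4),
            (2,4), (3,4), (2,7), (3,7)]
two = len(vq_cluster(patterns, 3.5))   # 2 clusters
one = len(vq_cluster(patterns, 4.5))   # 1 cluster
print(two, one)
```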
SC - ART Clustering
The taxonomy of important ART networks, the basic ART structure, and
the general ART architecture have been explained in the previous slides.
Here only Unsupervised ART (ART1 and ART2) Clustering are presented.
ART1 is a clustering algorithm that learns and recognizes binary patterns. Here:
- similar data are grouped into clusters;
- the clusters are reorganized based upon changes in the input data;
- a new cluster is created when different data are encountered.
In the following slides, the ART1 architecture, the model description, the pattern matching cycle, and the clustering algorithm with a numerical example are presented.
SC - ART1 architecture
[Figure: ART1 architecture - the attentional subsystem contains the comparison layer F1 and the recognition layer F2, with gain control units Gain1 (G1) and Gain2 (G2); bottom-up weights wij (LTM) connect F1 to F2, and top-down weights vji (LTM) connect F2 to F1; the binary input pattern I = (I1, . . , IH, . . , In), components 0 or 1, enters F1; the orienting subsystem contains the reset node, governed by the vigilance parameter, with an inhibitory connection from F1 and a reset connection to F2.]

The ART1 architecture consists of two subsystems:
- the attentional subsystem, containing the comparison layer F1, the recognition layer F2, and the gain control units Gain1 and Gain2 for controlling the flow of signals to the two layers;
- the orienting subsystem (reset layer), for controlling the attentional subsystem's overall dynamics.
The vigilance parameter determines the degree of mismatch that is to be tolerated between the input pattern vectors and the weights.
SC - ART1 Model description
The ART1 network has an attentional subsystem and an orienting subsystem, which together carry out the pattern matching operation during the network's operation. The network structure is as follows.

Attentional subsystem :
(a) F1 layer of neurons/nodes, called the input layer or comparison layer; short term memory (STM).
(b) F2 layer of neurons/nodes, called the output layer or recognition layer; short term memory (STM).
(c) Gain control units, Gain1 and Gain2, one for each layer.
(d) Bottom-up connections from the F1 to the F2 layer; long term memory (LTM).
(e) Top-down connections from the F2 to the F1 layer; long term memory (LTM).
(f) Interconnections among the nodes in each layer are not shown.
(g) Inhibitory connections (-ve weights) from the F2 layer to the gain control units.
(h) Excitatory connections (+ve weights) from the gain control units to F1 and F2.

Orienting subsystem :
(i) Reset layer for controlling the attentional subsystem's overall dynamics.
(j) Inhibitory connection (-ve weights) from the F1 layer to the Reset node.
(k) Excitatory connection (+ve weights) from the Reset node to the F2 layer.
SC - ART1 Model description
Comparison F1 and Recognition F2 layers :
F1 receives the binary external input and passes it to the recognition layer F2, which matches it to a classification category. The result is passed back to F1 to check whether the category matches that of the input:
- If yes (match), then a new input vector is read and the cycle starts again.
- If no (mismatch), then the orienting system inhibits the previous category, and another category match is sought.

[Figure: a unit x1i in the F1 layer receives the external input Ii, an excitatory signal from Gain1 (G1), and top-down signals vji from F2; it sends its output through the bottom-up weights wij to F2 and an inhibitory signal to the orienting subsystem. A unit x2j in the F2 layer receives bottom-up signals from F1, a signal from Gain2 (G2), and the reset signal from the orienting subsystem; it sends top-down signals vji to all F1 units and G1, and competes with the other nodes in F2 (winner-take-all, WTA).]
SC - ART1 Pattern matching
The ART network structure does pattern matching and tries to determine whether an input pattern is among the patterns previously stored. The pattern matching cycle consists of input pattern presentation, pattern matching, search, and adaptation; it is illustrated step by step below.

(a) Pattern presentation :
An input pattern I is presented. It produces an activation pattern X across F1, and F1 generates an output pattern S. The input I also excites the gain control G1 and the orienting subsystem A; the output S from F1 inhibits the orientation subsystem, so that A remains inactive. The output signal S is passed through the bottom-up weighted connections to F2, producing an activation pattern Y across F2.
SC - ART1 Pattern matching
(b) Pattern matching :
The F2 activation pattern Y produces an output pattern U, and the winning F2 node sends U back toward F1. U is transformed to the pattern V by the LTM traces on the top-down connections, and V arrives at F1; Y also sends an inhibitory signal to the gain control G1.

Among the three possible sources of input to F1 or F2, only two are used at a time: the units on F1 and F2 can become active only if two out of the possible three sources of input are active. This feature is called the 2/3 rule. Due to the 2/3 rule, only those F1 nodes receiving signals from both I and V will remain active; so the pattern that remains on F1 is the intersection of I and V, the new activation pattern X* with output S*.
SC - ART1 Pattern matching
(c) Search (reset) :
If the top-down pattern V mismatches the input I sufficiently, the new F1 activation X* is much reduced, the inhibition from F1 on the orienting subsystem decreases, and the orientation subsystem A becomes active. A sends a reset signal to F2, which inhibits the currently responding node on F2; this prevents the same node winning the competition during the next cycle. Nodes on F2 inhibited in this way do not respond until a new input pattern is presented.
SC - ART1 Pattern matching
(d) Final recognition and adaptation :
After the reset, the original pattern X is reinstated on F1, and a new cycle of pattern matching begins, producing a new F2 activation Y*; the cycle repeats until a stored pattern matches the input or an uncommitted F2 node is recruited. The connections participating in a mismatch are not modified during this search. When a match occurs, the network settles down into a resonant state. During this stable state, the connections remain active for a sufficiently long time so that the weights are strengthened. This resonant state can arise only when a pattern match occurs, or during the enlisting of new units on F2 to store an unknown pattern.
SC - ART1 Algorithm
Notations
- I(X) is the input data set, of the form I(X) = { x(1), x(2), . . , x(t) }, where x(t) is the n-component input vector presented at time t.
- W = ( wij ), i = 1, n ; j = 1, m, is the matrix of bottom-up weights from the n units of layer F1 to the m units of layer F2; Wj denotes its j-th column vector. Example, for n = 3 and m = 2: Wj=1 = ( W11, W21, W31 )T and Wj=2 = ( W12, W22, W32 )T.
- V = ( vji ) is the matrix of top-down weights from layer F2 to layer F1; vj denotes its j-th row vector, e.g. vj=1 = ( v11, v12, v13 ).
- For any two vectors u and v belonging to the same vector space Rn,
      u x v = ( u1 v1 , . . , ui vi , . . , un vn )
  is the product taken component by component.
- The norm of u is || u || = Sum i=1 to n | ui |.
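The component-wise product and the norm used throughout the algorithm take one line each (a small sketch; the function names are our own):

```python
def comp_product(u, v):
    # u x v, taken component by component
    return [ui * vi for ui, vi in zip(u, v)]

def norm(u):
    # ||u|| = sum of the absolute component values
    return sum(abs(ui) for ui in u)

print(comp_product([1, 1, 1], [0, 0, 1]))   # [0, 0, 1]
print(norm([0, 1, 1]))                      # 2
```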
SC - ART1 Algorithm
- rho is the vigilance parameter, 0 < rho <= 1.
- The control gain is given by
      G2 = 1 if the input IH /= 0 ; 0 otherwise.
- Output: O(t) = ( O1, . . , Oj, . . , Om ), where Oj = 1 indicates that the input has been assigned to node j of F2, i.e., identifies the winning node of the cluster.
SC - ART1 Algorithm
Step - 1 (Initialization)
Initially, no input vector is applied, making the control gains G1 = 0, G2 = 0. Initialize the bottom-up wij (t) and top-down vji (t) weights for time t = 1:
      wij = 1 / (1 + n)  and  vji = 1 ,  for i = 1, . . , n ; j = 1, . . , m.
Example: if n = 3 and m = 2, then W(t) = ( wij (t) ) is a matrix of type n x m whose column vectors Wj=1 = ( W11, W21, W31 )T and Wj=2 = ( W12, W22, W32 )T have all entries wij = 1/4. The top-down vectors vj (t) = ( vj1 (t) . . vji (t) . . vjn (t) )T, T is transpose, have all entries vji = 1; each line is one such vector:
      vj=1 = ( v11, v12, v13 ) = ( 1, 1, 1 )
      vj=2 = ( v21, v22, v23 ) = ( 1, 1, 1 )
Learning rate = 0.9 ; vigilance parameter rho = 0.3.
SC - ART1 Algorithm
Step - 2 (Apply input)
An input pattern vector IH from the data set is presented to the F1 layer of the network. As the input I /= 0 (/= means "not equal"), the control gain G1 = 1, and all the neurons of F1 are activated, so the F1 activation equals the input vector, X(t) = IH.
SC - ART1 Algorithm
Step - 4 (Compute F2 activations)
Compute the input yj to each node j of layer F2:
      yj = Sum i=1 to n ( Ii x wij )
For the example with n = 3, m = 2:
      y j=1 = I1 W11 + I2 W21 + I3 W31
      y j=2 = I1 W12 + I2 W22 + I3 W32

Step - 5 (Select the winner)
Find k, the node of F2 with the largest yk calculated in Step 4:
      yk = max ( yj ) ,  j = 1, . . , m ,  m = no of nodes in F2.
If there is a tie, the winner is taken to be the second node among the equals, i.e., k = 2. Then go to Step 6.

Step - 6 (Vigilance test)
Perform the vigilance test for the winning F2 neuron k:
      r = ( VkT . X(t) ) / || X(t) ||
where Vk is the top-down weight vector of node k. If r > rho, the vigilance parameter, then resonance is possible: go to Step 7. Else disable node k and return to Step 5 to find a new winner.
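Steps 4-6 for the first input of the later numerical example can be traced in a few lines. The vigilance value rho = 0.3 and the tie-break are taken from the slides; everything else is a direct transcription of the formulas above.

```python
# Trace of Steps 4-6 for x(1) = {0, 0, 1} with the initial weights
# (w_ij = 1/4, v_ji = 1) and an assumed vigilance rho = 0.3.
I = [0, 0, 1]
W = [[0.25, 0.25], [0.25, 0.25], [0.25, 0.25]]   # bottom-up, n x m
V = [[1, 1, 1], [1, 1, 1]]                       # top-down, m x n
rho = 0.3

# Step 4: y_j = sum_i I_i * w_ij
y = [sum(I[i] * W[i][j] for i in range(3)) for j in range(2)]
# Step 5: winner; on a tie, the second node among the equals wins (k = 2)
k = max(range(2), key=lambda j: (y[j], j))
# Step 6: vigilance test r = (V_k . X) / ||X||
r = sum(v * x for v, x in zip(V[k], I)) / sum(I)
print(y, k + 1, r)   # [0.25, 0.25] 2 1.0  -> r > rho, resonance possible
```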
SC - ART1 Algorithm
Step - 7 (Comparison)
Compute the activation vector X*k of the comparison layer:
      X*k = ( x*1 , x*2 , . . , x*i=n )  where  x*i = vki x Ii
i.e., X*k is the component-by-component product of the winner's top-down weight vector and the input. Then compute the similarity between X*k and the input IH using
      || X*k || / || IH || = ( Sum i=1 to n x*i ) / ( Sum i=1 to n Ii )
Example: if X*k=2 = {0 0 1} and IH=1 = {0 0 1}, then the similarity is 1/1 = 1.
SC - ART1 Algorithm
Step - 8 (Association and adaptation)
If the similarity test
      || X*k || / || IH || > rho
is true, then associate the input IH with the F2 node k, and update the weights:
      vki (new) = vki (t) x Ii ,  where i = 1, 2, . . , n ,
      wki (new) = vki (new) / ( 0.5 + || vk (new) || ) ,  where i = 1, 2, . . , n.
The weights of the other nodes are unchanged.
(d) Update the weight matrices W(t) and V(t) accordingly for the next input vector, time t = 2: replace the k-th column of W and the k-th row of V.
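The Step 7-8 computations for the first input (winner k = 2, top-down vector still all ones) come out as in the slides; a direct transcription:

```python
# Steps 7-8 for x(1) = {0, 0, 1}: winner k = 2, v_k = {1, 1, 1}.
I = [0, 0, 1]
vk = [1, 1, 1]

x_star = [v * i for v, i in zip(vk, I)]       # X* = v_k x I = {0, 0, 1}
similarity = sum(x_star) / sum(I)             # ||X*|| / ||I|| = 1.0 > rho

vk_new = [v * i for v, i in zip(vk, I)]       # top-down update
wk_new = [v / (0.5 + sum(vk_new)) for v in vk_new]   # bottom-up update
print(x_star, similarity, wk_new)
# [0, 0, 1] 1.0 [0.0, 0.0, 0.6666666666666666]
```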
SC - ART1 Numerical example
Input :
The decimal numbers 1, 2, 3, 4, 5, 6, 7, given in the BCD format.
This input data is represented by the set I(X) of the form
I(X) = { x(1), x(2), x(3), x(4), x(5), x(6), x(7) }  where

      Decimal no.   BCD format
      1             x(1) = { 0 0 1 }T
      2             x(2) = { 0 1 0 }T
      3             x(3) = { 0 1 1 }T
      4             x(4) = { 1 0 0 }T
      5             x(5) = { 1 0 1 }T
      6             x(6) = { 1 1 0 }T
      7             x(7) = { 1 1 1 }T

The variable t is time; here it takes the natural numbers from 1 to 7, expressed as t = 1, 7 ; i.e., t = 1, 7 represents the 7 vectors. The task is to group the patterns into two clusters; therefore the output layer F2 contains m = 2 neurons.
SC - ART1 Numerical example
Step - 1 (Initialization)
Initially, no input vector is applied, making the control gains G1 = 0, G2 = 0. Initialize the bottom-up wij (t) and top-down vji (t) weights for time t = 1, where i = 1, n ; j = 1, m ; here n = 3 and m = 2:
      wij = 1/4 , i.e.
      W(t=1) = ( wij (t=1) ) :  Wj=1 = ( 1/4, 1/4, 1/4 )T ;  Wj=2 = ( 1/4, 1/4, 1/4 )T
      vj (t) = ( vj1 (t) . . vji (t) . . vjn (t) )T, T is transpose, with all vji = 1 :
      v(t=1) = ( vji (t=1) ) :  vj=1 = ( 1, 1, 1 ) ;  vj=2 = ( 1, 1, 1 )
Learning rate = 0.9.
Special rule : while selecting the winner, if there is an indecision (tie), then the winner is the second node between the equals.
SC - ART1 Numerical example
Step - 2 (Apply input)
The inputs x(t), t = 1 to h = 7, are presented to the F1 layer; that is, I(X) = { x(1), x(2), x(3), x(4), x(5), x(6), x(7) }.
Step - 3 (Choose input pattern vector)
Present a randomly chosen input data in BCD format as the input vector. Let us choose the data in natural order, say x(t) = x(1) = { 0 0 1 }. As the input I /= 0 (/= means "not equal"), the control gain G1 = 1 and all the F1 neurons are activated, so X(t=1) = x(1).
SC - ART1 Numerical example
Step - 4 : yj = Sum i=1 to n ( Ii x wij ), for input x(1) = { 0 0 1 } :
      y j=1 = 0 x 1/4 + 0 x 1/4 + 1 x 1/4 = 0.25
      y j=2 = 0 x 1/4 + 0 x 1/4 + 1 x 1/4 = 0.25
Step - 5 : yk = max ( yj ), j = 1, m. Here y1 = y2 = 0.25, an indecision tie. [Go by the Remarks mentioned before on how to deal with the tie.] Let us say the winner is the second node between the equals, i.e., k = 2.
Step - 6 : perform the vigilance test for the F2k output neuron, as below:
      r = ( VkT . X(t) ) / || X(t) ||
      Vk=2T = ( 1 1 1 ) ,  X(t=1) = ( 0 0 1 )T ,  || X(t=1) || = Sum | xi | = 1
      r = 1 / 1 = 1
Thus r > rho ; resonance is possible, so go to Step 7.
SC - ART1 Numerical example
Step - 7 : compute X*k = ( x*1 , x*2 , . . , x*i=n ), where x*i = vki x Ii, the component-by-component product of the winner's top-down vector and the input. Accordingly,
      X*k=2 = {1 1 1} x {0 0 1} = {0 0 1}
Compute the similarity between X*k and the input IH using
      || X*k || / || IH || = ( Sum i=1 to n x*i ) / ( Sum i=1 to n Ii ) ,  here n = 3 :
      X*k=2 = {0 0 1} ,  IH=1 = {0 0 1} ,  so the similarity = 1/1 = 1.
SC - ART1 Numerical example
The similarity between X*k=2 and IH=1 is 1. It means the test
      || X*k=2 || / || IH=1 || > rho
is true. Therefore, associate the input IH=1 with the F2 node k = 2, and adapt the weights:
      vk=2 (t=2) = vk=2,i (t=1) x Ii = {1 1 1} x {0 0 1} = {0 0 1} ,  i = 1, 2, . . , n = 3
      wk=2 (t=2) = vk=2,i (t=2) / ( 0.5 + || vk=2 (t=2) || ) = {0 0 1} / 1.5 = {0 0 2/3}
(d) Update the weight matrices W(t) and V(t) for the next input vector, time t = 2:
      v(t=2) :  vj=1 = ( 1, 1, 1 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=2) :  Wj=1 = ( 1/4, 1/4, 1/4 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I2 = { 0 1 0 } (decimal 2), with the weights
      W(t=2) :  Wj=1 = ( 1/4, 1/4, 1/4 )T ;  Wj=2 = ( 0, 0, 2/3 )T
      v(t=2) :  vj=1 = ( 1, 1, 1 ) ;  vj=2 = ( 0, 0, 1 )
Step - 4 :  y j=1 = 0 x 1/4 + 1 x 1/4 + 0 x 1/4 = 0.25 ;  y j=2 = 0 x 0 + 1 x 0 + 0 x 2/3 = 0
Step - 5 :  the winner is k = 1.
Step - 6 :  do the vigilance test,
      r = ( Vk=1T . X(t=2) ) / || X(t=2) || = ( {1 1 1} . {0 1 0} ) / 1 = 1 ,  so K = 1 is accepted.
Step - 7 :  component by component,
      X*k=1 = vk=1,i x IH=2,i = {1 1 1} x {0 1 0} = {0 1 0} ,  IH=2 = {0 1 0}
      similarity = || X*k=1 || / || IH=2 || = 1/1 = 1.
SC - ART1 Numerical example
The similarity between X*k=1 and IH=2 is 1, so the test || X*k=1 || / || IH=2 || > rho is true. Therefore associate IH=2 with the F2 node k = 1, and adapt the weights:
      vk=1 (t=3) = vk=1,i (t=2) x IH=2,i = {1 1 1} x {0 1 0} = {0 1 0} ,  i = 1, 2, . . , n = 3
      wk=1 (t=3) = vk=1,i (t=3) / ( 0.5 + || vk=1 (t=3) || ) = {0 1 0} / 1.5 = {0 2/3 0}
(d) Update the weight matrices W(t) and V(t) for the next input vector, time t = 3:
      v(t=3) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=3) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I3 = { 0 1 1 } (decimal 3), with W(t=3) and v(t=3) as above.
Step - 4 :  y j=1 = 0 x 0 + 1 x 2/3 + 1 x 0 = 0.666 ;  y j=2 = 0 x 0 + 1 x 0 + 1 x 2/3 = 0.666
Step - 5 :  an indecision tie; by the special rule the decision is k = 2.
Step - 6 :  do the vigilance test,
      r = ( Vk=2T . X(t=3) ) / || X(t=3) || = ( {0 0 1} . {0 1 1} ) / 2 = 1/2 = 0.5 ,  so r > rho.
Step - 7 :  component by component,
      X*k=2 = vk=2,i x IH=3,i = {0 0 1} x {0 1 1} = {0 0 1}
      similarity = || X*k=2 || / || IH=3 || = 1/2 = 0.5.
SC - ART1 Numerical example
The similarity between X*k=2 and IH=3 is 0.5, so the test || X*k=2 || / || IH=3 || > rho is true. Therefore associate IH=3 with the F2 node k = 2, and adapt the weights:
      vk=2 (t=4) = vk=2,i (t=3) x IH=3,i = {0 0 1} x {0 1 1} = {0 0 1} ,  i = 1, 2, . . , n = 3
      wk=2 (t=4) = {0 0 1} / ( 0.5 + 1 ) = {0 0 2/3}
(d) Update the weight matrices W(t) and V(t) for the next input vector, time t = 4 (unchanged):
      v(t=4) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=4) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I4 = { 1 0 0 } (decimal 4).
Step - 4 :  y j=1 = 1 x 0 + 0 x 2/3 + 0 x 0 = 0 ;  y j=2 = 1 x 0 + 0 x 0 + 0 x 2/3 = 0
Step - 6 :  do the vigilance test for both nodes:
      r = ( Vk=1T . X(t=4) ) / || X(t=4) || = ( {0 1 0} . {1 0 0} ) / 1 = 0
r < rho, so node 1 fails; put the output O1(t = 4) = 0.
      r = ( Vk=2T . X(t=4) ) / || X(t=4) || = ( {0 0 1} . {1 0 0} ) / 1 = 0
r < rho, so node 2 fails as well; put the output O2(t = 4) = 0.
SC - ART1 Numerical example
Neither node passes the vigilance test, so the input x(4) = { 1 0 0 } is not associated with either cluster, and no weights are adapted. Update (carry over) the weight matrices for the next input vector, time t = 5:
      W(4) = W(3) ;  V(4) = V(3) ;  O(t = 4) = { 0 0 }
      v(t=5) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=5) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I5 = { 1 0 1 } (decimal 5), with W(t=5) and v(t=5) as above.
Step - 4 :  y j=1 = 1 x 0 + 0 x 2/3 + 1 x 0 = 0 ;  y j=2 = 1 x 0 + 0 x 0 + 1 x 2/3 = 2/3
Step - 5 :  the winner is k = 2.
Step - 6 :  do the vigilance test,
      r = ( Vk=2T . X(t=5) ) / || X(t=5) || = ( {0 0 1} . {1 0 1} ) / 2 = 1/2 = 0.5 ,  so r > rho.
Step - 7 :  component by component,
      X*k=2 = vk=2,i x IH=5,i = {0 0 1} x {1 0 1} = {0 0 1}
      similarity = || X*k=2 || / || IH=5 || = 1/2 = 0.5.
SC - ART1 Numerical example
The similarity between X*k=2 and IH=5 is 0.5, so the test || X*k=2 || / || IH=5 || > rho is true. Therefore associate IH=5 with the F2 node k = 2, and adapt the weights:
      vk=2 (t=6) = vk=2,i (t=5) x IH=5,i = {0 0 1} x {1 0 1} = {0 0 1} ,  i = 1, 2, . . , n = 3
      wk=2 (t=6) = {0 0 1} / ( 0.5 + 1 ) = {0 0 2/3}
(d) Update the weight matrices W(t) and V(t) for the next input vector, time t = 6 (unchanged):
      v(t=6) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=6) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I6 = { 1 1 0 } (decimal 6), with W(t=6) and v(t=6) as above.
Step - 4 :  y j=1 = 1 x 0 + 1 x 2/3 + 0 x 0 = 2/3 ;  y j=2 = 1 x 0 + 1 x 0 + 0 x 2/3 = 0
Step - 5 :  the winner is k = 1.
Step - 6 :  do the vigilance test,
      r = ( Vk=1T . X(t=6) ) / || X(t=6) || = ( {0 1 0} . {1 1 0} ) / 2 = 1/2 = 0.5 ,  so r > rho.
Step - 7 :  component by component,
      X*k=1 = vk=1,i x IH=6,i = {0 1 0} x {1 1 0} = {0 1 0}
      similarity = || X*k=1 || / || IH=6 || = 1/2 = 0.5.
SC - ART1 Numerical example
The similarity between X*k=1 and IH=6 is 0.5, so the test || X*k=1 || / || IH=6 || > rho is true. Therefore associate IH=6 with the F2 node k = 1, and adapt the weights:
      vk=1 (t=7) = vk=1,i (t=6) x IH=6,i = {0 1 0} x {1 1 0} = {0 1 0} ,  i = 1, 2, . . , n = 3
      wk=1 (t=7) = {0 1 0} / ( 0.5 + 1 ) = {0 2/3 0}
(d) Update the weight matrices W(t) and V(t) for the next input vector, time t = 7 (unchanged):
      v(t=7) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=7) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Present the next input, I7 = { 1 1 1 } (decimal 7), with W(t=7) and v(t=7) as above.
Step - 4 :  y j=1 = 1 x 0 + 1 x 2/3 + 1 x 0 = 2/3 ;  y j=2 = 1 x 0 + 1 x 0 + 1 x 2/3 = 2/3
Step - 5 :  an indecision tie; by the special rule the decision is k = 2.
Step - 6 :  do the vigilance test,
      r = ( Vk=2T . X(t=7) ) / || X(t=7) || = ( {0 0 1} . {1 1 1} ) / 3 = 1/3 = 0.333 ,  so r > rho.
Step - 7 :  component by component,
      X*k=2 = vk=2,i x IH=7,i = {0 0 1} x {1 1 1} = {0 0 1}
      similarity = || X*k=2 || / || IH=7 || = 1/3 = 0.333.
SC - ART1 Numerical example
The similarity between X*k=2 and IH=7 is 0.333, so the test || X*k=2 || / || IH=7 || > rho is true. Therefore associate IH=7 with the F2 node k = 2, and adapt the weights:
      vk=2 (t=8) = vk=2,i (t=7) x IH=7,i = {0 0 1} x {1 1 1} = {0 0 1} ,  i = 1, 2, . . , n = 3
      wk=2 (t=8) = {0 0 1} / ( 0.5 + 1 ) = {0 0 2/3}
(d) Update the weight matrices W(t) and V(t):
      v(t=8) :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W(t=8) :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
SC - ART1 Numerical example
Remarks
The decimal numbers 1, 2, 3, 4, 5, 6, 7, given in the BCD format, were grouped by the network into two clusters (classes), broadly as even or odd:
- Cluster class A1 (node k = 1) : { X(t=2), X(t=6) } , i.e., the decimal numbers 2 and 6 ;
- Cluster class A2 (node k = 2) : { X(t=1), X(t=3), X(t=5), X(t=7) } , i.e., the decimal numbers 1, 3, 5, 7 ;
- The input X(t=4), decimal 4, failed the vigilance test at both nodes and was not assigned to either cluster.
These clusters were arrived at after all the patterns, 1 to 7, had been presented. The final weights are:
      v :  vj=1 = ( 0, 1, 0 ) ;  vj=2 = ( 0, 0, 1 )
      W :  Wj=1 = ( 0, 2/3, 0 )T ;  Wj=2 = ( 0, 0, 2/3 )T
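The whole trace above can be reproduced by a short script implementing this simplified ART1 procedure. The vigilance value rho = 0.3 is an assumption consistent with the slides' tests (0.333 passes, so rho < 0.333), and the slides' tie-break rule of picking the second node is used.

```python
# Simplified ART1 clustering of the BCD patterns 1..7, following the
# procedure in the slides (assumed vigilance rho = 0.3; on a tie the
# second node among the equals wins, per the slides' special rule).
n, m, rho = 3, 2, 0.3
W = [[1 / 4] * n for _ in range(m)]   # bottom-up weights, one row per F2 node
V = [[1] * n for _ in range(m)]       # top-down weights, one row per F2 node
patterns = [[0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1]]

assigned = []                         # winning cluster per input (0 = none)
for I in patterns:
    # Steps 4-5: activations and winner (tie -> the higher-index node)
    y = [sum(I[i] * W[j][i] for i in range(n)) for j in range(m)]
    order = sorted(range(m), key=lambda j: (y[j], j), reverse=True)
    winner = 0
    for k in order:
        # Step 6: vigilance test r = (V_k . I) / ||I||
        r = sum(V[k][i] * I[i] for i in range(n)) / sum(I)
        if r > rho:                   # Steps 7-8: resonance -> adapt node k
            V[k] = [V[k][i] * I[i] for i in range(n)]
            W[k] = [v / (0.5 + sum(V[k])) for v in V[k]]
            winner = k + 1
            break                     # else: node inhibited, try the next one
    assigned.append(winner)

print(assigned)   # [2, 1, 2, 0, 2, 1, 2] -> odd numbers in cluster 2,
                  # 2 and 6 in cluster 1, 4 unassigned
```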
SC - ART2
4.3 ART2
The Adaptive Resonance Theory network developed by Carpenter and Grossberg for clustering binary vectors, called ART1, has been illustrated in the previous section. They later developed ART2 for clustering continuous- or real-valued vectors. The capability of recognizing analog patterns is a significant enhancement to the system. The differences between ART2 and ART1 are:
- ART2 includes the modifications needed to accommodate patterns with continuous-valued components.
- The F1 field of ART2 is more complex, because continuous-valued input vectors may be arbitrarily close together; the F1 layer therefore includes a combination of normalization and noise suppression.
- The learning laws of ART2 are simple, though the network itself is complicated.