
Counter Propagation Neural Network (CPN)

S. Vivekanandan

Cabin: TT 319A
E-Mail: svivekanandan@vit.ac.in
Mobile: 8124274447
Counter Propagation Network (CPN)

• Developed by Robert Hecht-Nielsen.
• Combines a Kohonen self-organizing map (SOM) with a Grossberg outstar.
• A multilayer network built from input, clustering and output layers.
• Training is roughly one hundred times faster than back-propagation (BPN).
• Provides a solution for applications that cannot afford many training
iterations, such as data compression, function approximation, pattern
association, pattern completion and signal enhancement.
• CPN functions as a look-up table capable of generalization.



Counter Propagation Network (CPN)

• CPN is trained in two stages:
1. The input vectors are clustered on the basis of Euclidean distance or the
dot-product method.
2. The desired response is obtained by adapting the weights from the cluster
units to the output units.
• Classification of CPN:
1. Full counter propagation network
2. Forward-only counter propagation network
Advantages
1. Simple
2. Forms a good statistical model
3. Trains rapidly



Full Counter Propagation Network (CPN)

• It possesses a generalization capability that allows it to produce a correct
output even when the input is incomplete or partially incorrect.
• It can represent a large number of vector pairs by constructing a look-up
table.
• It operates in two stages. During the first phase the training vectors are
used to form clusters by Euclidean distance, and in the second phase the
weights to the output units are adapted.
• It functions in two modes (a sketch of normal mode follows below):
1. Normal – an input is accepted and the corresponding output is produced
2. Training – an input is applied and the weights are adjusted to obtain the
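
To make the normal mode concrete, here is a minimal Python sketch, assuming a net that is already trained. The array names V, W, U, T follow the notation defined later in these slides; the function name itself is ours:

```python
import numpy as np

def full_cpn_normal_mode(x, y, V, W, U, T):
    """Normal mode of a trained full CPN (illustrative sketch).

    The cluster unit nearest to the pair (x, y) wins the competition,
    and its outstar weights are read out as the approximations x*, y*.
    V: (n, p) X-to-cluster weights, W: (m, p) Y-to-cluster weights,
    T: (p, n) cluster-to-X* weights, U: (p, m) cluster-to-Y* weights.
    """
    d = ((x[:, None] - V) ** 2).sum(axis=0) + ((y[:, None] - W) ** 2).sum(axis=0)
    j = int(np.argmin(d))   # winning cluster unit Zj
    return T[j], U[j]       # x* and y*
```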



Architecture

• The architecture of the full CPN resembles the instar and outstar model.
• The full CPN has two input layers and two output layers, with a hidden
(cluster) layer common to both.
• Instar – connects the input layers to the hidden layer (first phase).
• Outstar – connects the hidden layer to the output layers (second phase).
• Weights are updated in both the instar and the outstar.



• x – input training vector (x1, …, xn)
• y – target output vector (y1, …, ym)
• Zj – activation of cluster unit j
• x* – approximation to the vector x
• y* – approximation to the vector y
• Vij – weight from X input unit i to cluster unit Zj
• Wkj – weight from Y input unit k to cluster unit Zj
• Tji – weight from cluster unit Zj to X* output unit i
• Ujk – weight from cluster unit Zj to Y* output unit k
• α, β – learning rates during Kohonen learning
• a, b – learning rates during Grossberg learning



Step 1 : Initialize the weights, learning rates, etc.
Step 2 : While the stopping condition for phase 1 training is false, perform steps 3-8
Step 3 : For each training input pair x:y, do steps 4-6
Step 4 : Set X input layer activations to vector x
         Set Y input layer activations to vector y
Step 5 : Find the winning cluster unit using Euclidean distance
Step 6 : Update the weights for the winning unit:
         Vij(new) = Vij(old) + α [xi – Vij(old)]
         Wkj(new) = Wkj(old) + β [yk – Wkj(old)]
Step 7 : Reduce learning rates α and β
Step 8 : Test the stopping condition for phase 1



Step 9 : While the stopping condition for phase 2 training is false, perform
         steps 10-16
Step 10 : For each training input pair x:y, do steps 11-14
Step 11 : Set X input layer activations to vector x
          Set Y input layer activations to vector y
Step 12 : Find the winning cluster unit using Euclidean distance
Step 13 : Update the weights for the winning unit:
          Vij(new) = Vij(old) + α [xi – Vij(old)]
          Wkj(new) = Wkj(old) + β [yk – Wkj(old)]
Step 14 : Update the weights from unit Zj to the output layers:
          Ujk(new) = Ujk(old) + a [yk – Ujk(old)]
          tji(new) = tji(old) + b [xi – tji(old)]
Step 15 : Reduce learning rates a and b
Step 16 : Test the stopping condition for phase 2 training
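
The two phases above can be condensed into a short NumPy sketch. The parameter names follow the slide notation; the epoch counts, decay schedule and random initialisation are illustrative assumptions rather than part of the algorithm as stated:

```python
import numpy as np

def train_full_cpn(X, Y, p, alpha=0.5, beta=0.5, a=0.1, b=0.1,
                   epochs1=100, epochs2=100, decay=0.99):
    """Train a full CPN on paired vectors X (P, n) and Y (P, m), p cluster units."""
    rng = np.random.default_rng(0)
    n, m = X.shape[1], Y.shape[1]
    V = rng.random((n, p))   # X -> cluster weights Vij
    W = rng.random((m, p))   # Y -> cluster weights Wkj
    U = rng.random((p, m))   # cluster -> Y* weights Ujk
    T = rng.random((p, n))   # cluster -> X* weights Tji

    def winner(x, y):
        # Euclidean-distance competition (steps 5 and 12)
        d = ((x[:, None] - V) ** 2).sum(0) + ((y[:, None] - W) ** 2).sum(0)
        return int(np.argmin(d))

    # Phase 1: Kohonen learning clusters the (x, y) pairs (steps 2-8)
    for _ in range(epochs1):
        for x, y in zip(X, Y):
            j = winner(x, y)
            V[:, j] += alpha * (x - V[:, j])
            W[:, j] += beta * (y - W[:, j])
        alpha *= decay; beta *= decay          # step 7

    # Phase 2: Grossberg learning trains the outstar weights (steps 9-16)
    for _ in range(epochs2):
        for x, y in zip(X, Y):
            j = winner(x, y)
            V[:, j] += alpha * (x - V[:, j])   # step 13
            W[:, j] += beta * (y - W[:, j])
            U[j] += a * (y - U[j])             # step 14
            T[j] += b * (x - T[j])
        a *= decay; b *= decay                 # step 15
    return V, W, U, T
```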



• The winning unit is selected either by the dot-product method or by
Euclidean distance.
• In the dot-product method the net input is calculated as
         Z_inj = Σi xi Vij + Σk yk Wkj
The cluster unit with the largest net input is the winner. For this method the
vectors should be normalised.
• In the Euclidean distance method
         D(j) = Σi (xi – Vij)² + Σk (yk – Wkj)²
The cluster unit whose squared distance from the input vectors is smallest
is the winner.
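
Both selection rules can be transcribed directly from these formulas (a minimal sketch; the function names are ours):

```python
import numpy as np

def winner_dot(x, y, V, W):
    """Dot-product rule: the cluster unit with the largest net input
    Z_inj = sum_i xi Vij + sum_k yk Wkj wins. Assumes normalised vectors."""
    z_in = x @ V + y @ W
    return int(np.argmax(z_in))

def winner_euclidean(x, y, V, W):
    """Euclidean rule: the unit with the smallest squared distance
    D(j) = sum_i (xi - Vij)^2 + sum_k (yk - Wkj)^2 wins."""
    D = ((x[:, None] - V) ** 2).sum(axis=0) + ((y[:, None] - W) ** 2).sum(axis=0)
    return int(np.argmin(D))
```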



Forward only Counter Propagation Network

• Forward-only CPN is a simplified form of the full CPN.
• This net may be used when the mapping from x to y is well defined.
• It differs from the full CPN in that it uses only the x-vectors to form the
clusters during the first stage.
• The architecture may look similar to a BPN architecture, but in the
forward-only CPN only one unit in the cluster layer is active after
competition and sends a signal to the output layer.
• The forward-only CPN has only one input layer and one output layer, but
training is still performed in two stages.



Step 0 : Initialize the weights, learning rates, etc.
Step 1 : While the stopping condition for phase 1 training is false, perform steps 2-7
Step 2 : For each training input x, do steps 3-5
Step 3 : Set X input layer activations to vector x
Step 4 : Find the winning cluster unit using Euclidean distance
Step 5 : Update the weights for the winning unit:
         Vij(new) = Vij(old) + α [xi – Vij(old)]
Step 6 : Reduce learning rate α
Step 7 : Test the stopping condition for phase 1



Step 8 : While the stopping condition for phase 2 training is false, perform
         steps 9-15
Step 9 : For each training input pair x:y, do steps 10-13
Step 10 : Set X input layer activations to vector x
          Set Y output layer activations to vector y
Step 11 : Find the winning cluster unit using Euclidean distance
Step 12 : Update the weights into unit Zj; here α is a small constant:
          Vij(new) = Vij(old) + α [xi – Vij(old)]
Step 13 : Update the weights from unit Zj to the output layer:
          Wjk(new) = Wjk(old) + a [yk – Wjk(old)]
Step 14 : Reduce learning rate a
Step 15 : Test the stopping condition for phase 2 training

Note: Typical values are α = 0.5 to 0.8 and a = 0.1 to 0.6.
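
A compact sketch of the forward-only procedure, mirroring the steps above (epoch counts, decay schedule and initialisation are illustrative assumptions; the defaults fall in the typical ranges quoted in the note):

```python
import numpy as np

def train_forward_only_cpn(X, Y, p, alpha=0.6, a=0.3,
                           epochs1=100, epochs2=100, decay=0.99):
    """Train a forward-only CPN: clusters are formed from the x-vectors alone,
    then outstar weights map each cluster unit to its target y."""
    rng = np.random.default_rng(0)
    V = rng.random((X.shape[1], p))   # x -> cluster weights Vij
    W = rng.random((p, Y.shape[1]))   # cluster -> y weights Wjk

    # Phase 1: Kohonen learning on x only (steps 1-7)
    for _ in range(epochs1):
        for x in X:
            j = int(np.argmin(((x[:, None] - V) ** 2).sum(0)))
            V[:, j] += alpha * (x - V[:, j])   # step 5
        alpha *= decay                         # step 6

    # Phase 2: alpha is frozen at a small constant (step 12),
    # while the outstar weights learn the targets (step 13)
    alpha = 0.05                               # illustrative small constant
    for _ in range(epochs2):
        for x, y in zip(X, Y):
            j = int(np.argmin(((x[:, None] - V) ** 2).sum(0)))
            V[:, j] += alpha * (x - V[:, j])
            W[j] += a * (y - W[j])
        a *= decay                             # step 14
    return V, W

def forward_only_predict(x, V, W):
    """Normal mode: the winning cluster's outstar weights are the output y*."""
    j = int(np.argmin(((x[:, None] - V) ** 2).sum(0)))
    return W[j]
```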



• The winning unit is selected either by the dot-product method or by
Euclidean distance.
• In the dot-product method the net input is calculated as
         Z_inj = Σi xi Vij
The cluster unit with the largest net input is the winner. For this method the
vectors should be normalised.
• In the Euclidean distance method
         D(j) = Σi (xi – Vij)²
The cluster unit whose squared distance from the input vector is smallest
is the winner.



