Step 3: Calculate Actual Output
y(t) = Fh( Σ_{i=0..n} w_i(t) * x_i(t) )
where Fh(a) = 1 when a > 0, and = -1 when a <= 0.

Step 4: Adapt Weights
w_i(t+1) = w_i(t) + η * [d(t) - Σ_{i=0..n} w_i(t) * x_i(t)] * x_i(t), 0 <= i <= n
where η is the learning rate and usually is a small number ranging from 0 to 1 (typically <= 1/n).
Step 5: Repeat Steps 2 to 4
Repeat until the desired outputs and the actual network outputs are equal for all the input vectors of the training set.
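The Adaline procedure above can be sketched in Python. This is a minimal illustration, not part of the original notes: the function names, the learning rate of 0.1, and the bipolar AND training set are all assumptions chosen for the demo.

```python
import random

def fh(a):
    # Threshold function Fh: +1 when a > 0, otherwise -1
    return 1 if a > 0 else -1

def train_adaline(samples, eta=0.1, max_epochs=100, seed=0):
    """Widrow-Hoff (LMS) training sketch; samples is a list of
    (inputs, desired) pairs with bipolar (+1/-1) values."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    # Step 1: small random weights; w[0] doubles as the threshold (bias weight)
    w = [rng.uniform(-0.5, 0.5) for _ in range(n + 1)]
    for _ in range(max_epochs):
        all_correct = True
        for x, d in samples:
            xv = [1] + list(x)  # Step 2: x_0 is a fixed bias, always 1
            s = sum(wi * xi for wi, xi in zip(w, xv))  # weighted sum
            if fh(s) != d:      # Step 3: actual output vs. desired
                all_correct = False
            # Step 4: w_i(t+1) = w_i(t) + eta * [d(t) - s] * x_i(t)
            for i in range(n + 1):
                w[i] += eta * (d - s) * xv[i]
        if all_correct:         # Step 5: stop once every pattern is correct
            break
    return w

# Toy training set: bipolar AND (an assumed example, not from the text)
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w = train_adaline(data)
```

Note that the update uses the raw weighted sum s, not the thresholded output Fh(s): the LMS rule minimizes the error before the threshold, which is what distinguishes Adaline from the perceptron rule.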
Madaline
(Many Adalines)
Learning Procedure
The madaline system has a layer of adaline units that are connected to a single madaline unit. The madaline unit employs a majority-vote rule on the outputs of the adaline layer: if more than half of the adalines output +1, then the madaline unit outputs +1 (and similarly for -1).
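With bipolar (+1/-1) outputs, the majority vote reduces to the sign of the sum of the adaline outputs. A one-function sketch (the function name is illustrative, not from the original):

```python
def madaline_output(adaline_outputs):
    # Bipolar majority vote: with +1/-1 votes, the sum is positive exactly
    # when more than half the adalines output +1; ties fall to -1.
    return 1 if sum(adaline_outputs) > 0 else -1
```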
[Figure: Madaline network — inputs feeding a layer of adaline units into a single madaline unit]

Step 1: Initialize Weights (W_k1 .. W_kn) and Threshold (W_k0)
Set all weights and thresholds of the adaline units to small bipolar random values. Note that k represents adaline unit k and n represents the number of inputs to each adaline unit.
Step 2: Present New Input and Desired Output
Present the input vector x_1, x_2, ..., x_n along with the desired output d_k(t).
Note:
** x_0 is a fixed bias and is always set equal to 1.
** d_k(t) is the desired output for adaline unit k and takes the value of +1 or -1.
Step 3: Calculate Adaline Outputs
y_k(t) = Fh( Σ_{i=0..n} w_ki(t) * x_i(t) )
where Fh(e) = 1 when e > 0, and = -1 when e <= 0, and y_k(t) is the output from adaline unit k.

Step 4: Determine Madaline Output
M(t) = the majority vote of the adaline outputs y_k(t): +1 if more than half of the y_k(t) are +1, otherwise -1.
Step 5: Determine Error and Update Weights
If M(t) equals the desired output, there is no need to update the weights. Otherwise, the processing elements in the adaline layer "compete": the winner is the one with the weighted sum nearest to zero but with the wrong output. Only this processing element is adapted:
w_ci(t+1) = w_ci(t) + η * [d(t) - Σ_{i=0..n} w_ci(t) * x_i(t)] * x_i(t), 0 <= i <= n
where η is the learning rate and usually is a small number ranging from 0 to 1 (typically <= 1/n), and c is the winner adaline unit.
Step 6: Repeat Steps 2 to 5
Repeat until the desired outputs and the actual network outputs are equal for all the input vectors of the training set.
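Putting Steps 1 to 6 together, the Madaline procedure can be sketched as below. Everything concrete here is an assumption for illustration: the choice of three adaline units, the learning rate of 0.1, the epoch limit, and the bipolar AND training set are not from the original notes.

```python
import random

def fh(e):
    # Threshold function Fh: +1 when e > 0, otherwise -1
    return 1 if e > 0 else -1

def train_madaline(samples, n_units=3, eta=0.1, max_epochs=200, seed=1):
    """Sketch of Steps 1-6: n_units adalines feed one majority-vote madaline."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    # Step 1: small random weights per adaline; W[k][0] is unit k's threshold
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n + 1)]
         for _ in range(n_units)]
    for _ in range(max_epochs):
        all_correct = True
        for x, d in samples:
            xv = [1] + list(x)                                 # Step 2: bias x_0 = 1
            sums = [sum(wi * xi for wi, xi in zip(w, xv))
                    for w in W]                                # Step 3: weighted sums
            outs = [fh(s) for s in sums]
            M = 1 if sum(outs) > 0 else -1                     # Step 4: majority vote
            if M != d:                                         # Step 5: adapt winner
                all_correct = False
                # Winner: wrong output, weighted sum nearest to zero.
                # (When the majority is wrong, at least one unit is wrong.)
                wrong = [k for k in range(n_units) if outs[k] != d]
                c = min(wrong, key=lambda k: abs(sums[k]))
                for i in range(n + 1):
                    W[c][i] += eta * (d - sums[c]) * xv[i]
        if all_correct:                                        # Step 6: repeat
            break
    return W

def madaline_predict(W, x):
    xv = [1] + list(x)
    outs = [fh(sum(wi * xi for wi, xi in zip(w, xv))) for w in W]
    return 1 if sum(outs) > 0 else -1

# Toy training set: bipolar AND (an assumed example, not from the text)
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
W = train_madaline(data)
```

A design point worth noting: adapting only the winner (the wrong unit whose weighted sum is nearest zero) is the minimal-disturbance principle — flipping that unit's output requires the smallest change to the network's weights.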