
CpE 320: Application of Neural Networks: Adaline (Adaptive Linear Neuron) Learning Procedure

Least Mean Square (LMS) Algorithm


Step 1: Initialize Weights (w1..wn) and Threshold (w0)
Set all weights and the threshold to small bipolar random values.

Step 2: Present New Input and Desired Output
Present the input vector x1, x2, ..., xn along with the desired output d(t).
Note: ** x0 is a fixed bias input and is always set equal to 1.
      ** d(t) takes the value +1 or -1.

Step 3: Calculate Actual Output [y(t)]

    y(t) = Fh( sum_{i=0..n} wi(t) * xi(t) )

where Fh(a) = +1 when a > 0, and = -1 when a <= 0.

Step 4: Adapt Weights

    wi(t+1) = wi(t) + η * [ d(t) - sum_{j=0..n} wj(t) * xj(t) ] * xi(t)

where 0 <= i <= n, and η is the learning rate, usually a small number ranging from 0 to 1 (typically <= 1/n).

Step 5: Repeat Steps 2 to 4
Repeat until the actual network outputs equal the desired outputs for all input vectors of the training set.
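The five steps above can be sketched as a short training loop. This is an illustrative implementation, not code from the handout; the names (`hard_limiter`, `train_adaline`) and the bipolar AND example are my own choices, and inputs/targets are assumed bipolar (+1/-1) as in the procedure.

```python
import numpy as np

def hard_limiter(a):
    """Fh(a): +1 when a > 0, otherwise -1."""
    return 1 if a > 0 else -1

def train_adaline(X, d, eta=0.1, max_epochs=100, seed=0):
    """Adaline/LMS training following Steps 1-5 of the procedure."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # prepend x0 = 1 (bias)
    w = rng.uniform(-0.5, 0.5, X.shape[1])                       # Step 1: small random weights
    for _ in range(max_epochs):
        for x, target in zip(X, d):                              # Step 2: present each pattern
            s = w @ x                                            # weighted sum
            w += eta * (target - s) * x                          # Step 4: LMS update on the linear output
        # Step 5: stop when the hard-limited outputs match the targets everywhere
        if all(hard_limiter(w @ x) == t for x, t in zip(X, d)):
            break
    return w

# Bipolar AND: output is +1 only when both inputs are +1
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
d = [-1, -1, -1, 1]
w = train_adaline(X, d)
print([hard_limiter(w @ np.array([1, *x])) for x in X])  # expect [-1, -1, -1, 1]
```

Note that the LMS error term uses the linear weighted sum, not the hard-limited output; the hard limiter is applied only when reporting y(t) and when testing for convergence.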

CpE 320: Neural Networks: P. Klinkhachorn 1/26/2000

Madaline

(Many Adalines)

Learning Procedure

The madaline system has a layer of adaline units whose outputs are connected to a single madaline unit. The madaline unit applies a majority-vote rule to the outputs of the adaline layer: if more than half of the adalines output +1, then the madaline unit outputs +1 (and similarly for -1).
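With bipolar outputs, the majority vote reduces to the sign of the sum. A minimal sketch (the helper name `madaline_output` is my own; with an even number of adalines a tie sums to zero, and breaking the tie toward -1 here is an arbitrary choice):

```python
def madaline_output(adaline_outputs):
    """Majority vote over bipolar (+1/-1) adaline outputs."""
    return 1 if sum(adaline_outputs) > 0 else -1

print(madaline_output([1, 1, -1]))   # two of three adalines vote +1 -> prints 1
print(madaline_output([-1, -1, 1]))  # majority votes -1 -> prints -1
```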

[Figure: Madaline network — the inputs feed a layer of adaline units, whose outputs feed a single madaline unit.]

Step 1: Initialize Weights (wk1..wkn) and Threshold (wk0)
Set all weights and thresholds of the adaline units to small bipolar random values. Note that k denotes adaline unit k and n denotes the number of inputs to each adaline unit.

Step 2: Present New Input and Desired Output
Present the input vector x1, x2, ..., xn along with the desired output d(t).
Note: ** x0 is a fixed bias input and is always set equal to 1.
      ** d(t) is the desired madaline output and takes the value +1 or -1.

Step 3: Calculate Actual Output [yk(t)]

    yk(t) = Fh( sum_{i=0..n} wki(t) * xi(t) )

where Fh(e) = +1 when e > 0, and = -1 when e <= 0, and yk(t) is the output from adaline unit k.

Step 4: Determine Actual Madaline Output, M(t)

    M(t) = Fmajority( yk(t) )

i.e., the majority vote over the outputs of all adaline units k.


Step 5: Determine Error and Update Weights
If M(t) equals the desired output, there is no need to update the weights. Otherwise: in a madaline network, the processing elements in the adaline layer "compete". The winner is the one with the weighted sum nearest to zero but with the wrong output. Only this processing element is adapted.
    wci(t+1) = wci(t) + η * [ d(t) - sum_{j=0..n} wcj(t) * xj(t) ] * xi(t)

where 0 <= i <= n, η is the learning rate, usually a small number ranging from 0 to 1 (typically <= 1/n), and c is the winning adaline unit.

Step 6: Repeat Steps 2 to 5
Repeat until the actual network outputs equal the desired outputs for all input vectors of the training set.
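The madaline procedure above can be sketched as follows. This is a minimal illustrative version (function names and the XOR example are my own, not from the handout): on each wrong madaline output, only the wrong-output adaline whose weighted sum is nearest to zero is adapted, using the same LMS update as in Step 5. Whether training succeeds within the epoch limit depends on the random initialization; convergence is not guaranteed.

```python
import numpy as np

def hard_limiter(a):
    """Fh: +1 when a > 0, otherwise -1."""
    return 1 if a > 0 else -1

def train_madaline(X, d, n_adalines=3, eta=0.1, max_epochs=200, seed=0):
    """Madaline training following Steps 1-6 of the procedure."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])   # x0 = 1 bias input
    W = rng.uniform(-0.5, 0.5, (n_adalines, X.shape[1]))          # Step 1
    for _ in range(max_epochs):
        errors = 0
        for x, target in zip(X, d):                   # Step 2
            sums = W @ x                              # Step 3: weighted sums of each adaline
            y = np.where(sums > 0, 1, -1)             # adaline outputs
            M = hard_limiter(y.sum())                 # Step 4: majority vote
            if M != target:                           # Step 5
                errors += 1
                # Winner: wrong-output adaline with weighted sum nearest to zero
                wrong = np.nonzero(y != target)[0]
                c = wrong[np.argmin(np.abs(sums[wrong]))]
                W[c] += eta * (target - sums[c]) * x  # LMS update on the winner only
        if errors == 0:                               # Step 6
            break
    return W

# Bipolar XOR: not linearly separable, so a single adaline cannot learn it
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
d = [-1, 1, 1, -1]
W = train_madaline(X, d)
```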

