
TABU BASED BACK PROPAGATION ALGORITHM FOR PERFORMANCE

IMPROVEMENT IN COMMUNICATION CHANNELS

Prof. J. K. Satapathy*, K. Ratna Subhashini**


Dept. of Electrical Engg, NIT, Rourkela-769008
India
*jks98v@yahoo.com, **subppy@gmail.com

ABSTRACT
This paper presents the equalization of communication channels using Artificial Neural Networks (ANNs). A novel method of training the ANNs using a Tabu based Back Propagation (BP) algorithm is described. The algorithm uses Tabu Search (TS) to improve the performance of the equalizer by escaping from the local minima in which the BP algorithm can become trapped, and thereby obtains a superior, global solution. From the results it can be noted that the proposed algorithm improves the capability of the ANN in classifying the received data.

KEY WORDS

Artificial Neural Networks, Tabu Search, Equalization, Local Minima, Global Solution.

1. Introduction

The Back Propagation (BP) algorithm revolutionized the use of Artificial Neural Networks (ANNs) in diverse fields of science and engineering, such as pattern recognition, function approximation, system identification, data mining and time series forecasting. These ANNs construct a functional relationship between input and output patterns through the learning process, and memorize that relationship in the form of weights for later applications [1, 2].

The BP algorithm belongs to the family of gradient-based algorithms, which provide an easy way of supervised learning for multilayered ANNs. The premise of gradient descent [3] is that a downhill movement in the direction of the negative gradient will eventually reach the minimum of the performance surface over its parameter space. Since gradient techniques converge locally, they often become trapped at suboptimal solutions [4], depending on the serendipity of the initial random starting point. Since obtaining a global solution is the main goal of any adaptive system, a global search technique seems more suitable for this difficult nonlinear optimization problem.

The popularity of Tabu search has grown significantly in the past few years as a global search technique. The roots of Tabu search go back to the 1970s; it was presented in its present form by Glover [5, 6]. The technique is well known for solving combinatorial problems such as the traveling salesman problem, design optimization, and the quadratic assignment problem.

In this paper we apply this technique to the more complex problem of training a neural network to work as an equalizer.

In Section 2 we discuss the process of equalization. In Section 3 we give a brief introduction to neural networks and their training using the BP algorithm. In Section 4 we discuss the proposed algorithm of using TS for training the ANNs. Section 5 is dedicated to a discussion of the experimental results.

2. Channel Equalization

The two principal causes of distortion [7] in a digital communication channel are Inter Symbol Interference (ISI) and additive noise. The ISI can be characterized by a Finite Impulse Response (FIR) filter [8, 9]. The noise can be internal or external to the system. Hence, at the receiver, the distortion must be compensated in order to reconstruct the transmitted symbols. This process of suppressing channel-induced distortion is called channel equalization. The digital communication scenario is illustrated in Fig. 1.

Fig. 1. Schematic of a Digital Communication System
The channel can be modeled as an FIR filter with a
transfer function
A(z) = \sum_{i=0}^{n_a - 1} a_i z^{-i}          (1)

where n_a is the length of the communication channel impulse response. The symbol sequence s(k) and the channel taps a_i can be complex valued. In this study, however, channels and symbols are restricted to be real valued. This corresponds to the use of multilevel pulse amplitude modulation (M-ary PAM) with a suitably defined symbol constellation.

Concentrating on the simpler real case allows us to highlight the basic principles and concepts. In particular, the case of binary symbols (M = 2) provides a very useful geometric visualization of the equalization process.
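As an illustration of the channel model, the following minimal C sketch (not the authors' implementation) shows how noisy observations r(k) could be generated for a real-valued symbol sequence passed through the FIR channel of equation (1); the function name, the uniform noise generator and the array layout are assumptions made for this example.

```c
#include <stdlib.h>

/* Generate noisy channel observations r(k) for a real-valued symbol
 * sequence s(k) passed through the FIR channel of equation (1).
 * taps[] holds a_0 ... a_{na-1}; sigma scales the additive noise.      */
void channel_observations(const double *taps, int na,
                          const double *s, int len,
                          double sigma, double *r)
{
    for (int k = 0; k < len; k++) {
        double y = 0.0;
        /* FIR convolution: sum over i of a_i * s(k - i) */
        for (int i = 0; i < na && i <= k; i++)
            y += taps[i] * s[k - i];

        /* crude additive noise, uniform in [-sigma, sigma]; a Gaussian
         * generator would normally be used instead                     */
        double n = sigma * (2.0 * rand() / (double)RAND_MAX - 1.0);
        r[k] = y + n;
    }
}
```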
The task of the equalizer is to reconstruct the transmitted symbols as accurately as possible based on the noisy channel observations r(k). Equalizers can be classified into two categories, namely the symbol-decision equalizer and the sequence-estimation equalizer. The latter type is rarely used, as it is computationally very expensive. Symbol-decision equalizers were initially implemented using linear transversal filters. Later, the advent of ANNs led to the modeling of equalizers which can provide superior performance in terms of Bit Error Rate (BER) compared to FIR modeling.

3. Neural Network Equalizer

In this section we consider the neural network equalizer, employing the definitions and notation introduced above. The neural network equalizer outperforms the linear transversal filter in terms of BER, as most communication channels require a nonlinear decision boundary [10]. The structure of the equalizer is shown in Fig. 2.

Fig. 2. Neural Network Equalizer

In the figure, r(k) represents the received signal. The structure constitutes three significant parts: one input layer, a set of hidden layers, and one output layer. All the nodes are interconnected by the weights w_{ij}^{l}, where i represents the destination node, j represents the source node, and the superscript l gives the layer number.

An equalizer of order m implies that it has m input nodes in its input layer, as shown in the figure. An equalizer has a single node in its output layer. The signal, received sequentially, is allowed to propagate through the hidden layers up to the node in the output layer [11, 12].

The output y_i^l of each node is the weighted sum of the outputs of all the nodes in the previous layer, passed through the activation function, which here is the hyperbolic tangent function given by

\varphi(x) = \frac{1 - e^{-ax}}{1 + e^{-ax}}          (2)

where a represents the slope of the activation function. Mathematically, the forward propagation of the neural network is given by [12]

v_i^l = \sum_{j=1}^{N_{l-1}} w_{ij}^{l} \, y_j^{l-1}          (3)

y_i^l = \varphi(v_i^l)          (4)

where v_i^l is called the induced local field or activation potential of node i in layer l, and N_{l-1} is the number of neurons in layer (l - 1).
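A minimal C sketch of the forward pass of equations (2)-(4) for a single layer is given below; it is an illustration only, and the function names, the array layout and the absence of bias terms are assumptions rather than details taken from the paper.

```c
#include <math.h>

/* Activation of equation (2): phi(x) = (1 - e^(-a*x)) / (1 + e^(-a*x)). */
static double phi(double x, double a)
{
    return (1.0 - exp(-a * x)) / (1.0 + exp(-a * x));
}

/* Forward propagation through one layer, equations (3) and (4).
 * y_prev[j], j = 0..n_prev-1 : outputs of layer l-1
 * w[i][j]                    : weight from source node j to destination i
 * v[i], y[i], i = 0..n_cur-1 : induced local fields and outputs of layer l */
void forward_layer(int n_prev, int n_cur,
                   const double *y_prev, double **w,
                   double a, double *v, double *y)
{
    for (int i = 0; i < n_cur; i++) {
        v[i] = 0.0;                       /* induced local field v_i^l */
        for (int j = 0; j < n_prev; j++)
            v[i] += w[i][j] * y_prev[j];  /* equation (3)              */
        y[i] = phi(v[i], a);              /* equation (4)              */
    }
}
```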
Back Propagation Algorithm

The BP algorithm consists of two passes through the different layers of the network: a forward pass and a backward pass [12]. The forward pass was described above. In the backward pass, the error signal, obtained by comparing the output of the node in the output layer with the desired response, is propagated backward through the synaptic weights, and the local gradient \delta_j^l at each node is computed using

\delta_j^l = \varphi'(v_j^l) \sum_{i=1}^{N_{l+1}} \delta_i^{l+1} \, w_{ij}^{l+1}          (5)

Due to the lack of a desired response at the hidden layers, it is not possible to compute the error at these nodes directly. Hence this local gradient is very important in providing an error measure at the hidden nodes. Using these local gradients, the synaptic weights are updated as shown below:

(weight correction) = (learning-rate parameter) \times (local gradient) \times (input signal of neuron)

\Delta w_{ji}^{l} = \eta \, \delta_j^l \, y_i^{l-1}

The weight correction \Delta w_{ji}^{l} is added to the present weight after each iteration.
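The backward pass can be sketched in the same style. Since equation (2) gives \varphi(x) = \tanh(ax/2), its derivative is \varphi'(x) = (a/2)(1 - \varphi(x)^2), which is what the sketch below uses; the routine is an illustration only, not the authors' code.

```c
/* Backward pass for one hidden layer: local gradients of equation (5)
 * followed by the weight update  dw_ji = eta * delta_j * y_i.
 * The derivative of the activation is computed from the stored outputs
 * y as (a/2) * (1 - y*y).                                               */
void backward_layer(int n_cur, int n_next, int n_prev,
                    const double *y,          /* outputs y_j^l            */
                    const double *y_prev,     /* outputs y_i^(l-1)        */
                    double **w_next,          /* weights w_ij^(l+1)       */
                    const double *delta_next, /* gradients delta_i^(l+1)  */
                    double a, double eta,
                    double **w,               /* weights w_ji^l (updated) */
                    double *delta)            /* gradients delta_j^l      */
{
    for (int j = 0; j < n_cur; j++) {
        double back = 0.0;
        for (int i = 0; i < n_next; i++)
            back += delta_next[i] * w_next[i][j];   /* sum in equation (5) */
        delta[j] = 0.5 * a * (1.0 - y[j] * y[j]) * back;

        for (int i = 0; i < n_prev; i++)
            w[j][i] += eta * delta[j] * y_prev[i];  /* weight update       */
    }
}
```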
4. Tabu Based BP (TBBP) Algorithm

In this section we first give a short description of the Tabu Search (TS) algorithm and then discuss our proposed algorithm of using TS for Back Propagation.

Tabu Search

Tabu search can be thought of as an iterative descent method. An initial solution is randomly generated and a neighborhood around that solution is examined. If a new solution is found in the neighborhood that is preferred to the original, the new solution replaces the old and the process repeats. If no solution is found that improves upon the old function evaluation, then, unlike a gradient descent procedure, which would stop at that point (a local minimum), the TS algorithm may continue by accepting a new value that is worse than the old value. A collection of solutions in a given neighborhood is therefore generated, and the final solution is the best solution found so far for that particular neighborhood.

To keep the search from cycling, an additional step is included that prohibits solutions from recurring (hence the name Tabu) for a user-defined number of iterations. The Tabu List (TL) is maintained by adding the last solution to the beginning of the list and discarding the oldest solution from the list. During this procedure, the best solution found so far is retained.

If a newly generated weight vector is not in the TL, the search goes on and the weight vector is added to the TL. For a solution to be rejected, all of its weights must be within the Tabu Area (TA) of some entry in the TL. The Aspiration Criterion (AC) is used to activate solutions that are tabued but around which there are some superior solutions.
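A possible realization of the Tabu List and of the Tabu Area test is sketched below in C; the list length, the number of weights and the circular-buffer layout are illustrative assumptions, not details taken from the paper.

```c
#include <math.h>

#define TL_SIZE 20   /* illustrative tabu-list length (user defined)   */
#define NW      10   /* number of weights in the network (assumption)  */

double tabu_list[TL_SIZE][NW];
int    tl_count = 0, tl_head = 0;

/* A candidate weight vector is rejected only if ALL of its weights lie
 * within the Tabu Area (TA) of some entry already stored in the TL.   */
int in_tabu_list(const double *w, double ta)
{
    for (int e = 0; e < tl_count; e++) {
        int all_close = 1;
        for (int k = 0; k < NW; k++)
            if (fabs(w[k] - tabu_list[e][k]) > ta) { all_close = 0; break; }
        if (all_close)
            return 1;              /* solution lies in a searched concave */
    }
    return 0;
}

/* Newest solution enters at the head; once the list is full the oldest
 * entry is overwritten (first-in, first-out).                           */
void add_to_tabu_list(const double *w)
{
    for (int k = 0; k < NW; k++)
        tabu_list[tl_head][k] = w[k];
    tl_head = (tl_head + 1) % TL_SIZE;
    if (tl_count < TL_SIZE) tl_count++;
}
```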
The TBBP

The TBBP can be divided into two steps: the Superficial Search (SS) and the Deep Search (DS). In the SS we search for solutions which have a high probability of leading to good global solutions. The DS trains these solutions, found in the SS, to find the best solution in their neighborhood.

The original weight vector W_0 is randomly generated, where W_0 includes all the weights of the neural network. The SS trains this original weight to a state W_0', a point inside a local-minimum concave but not at the bottom of the concave [4]. If this point is in the TL, it is considered to lie in an already searched concave and is not considered for DS.

There may, however, be some W_i', where i = 0, 1, 2, ..., which lie in a searched concave but satisfy

E(W_i') \ge (1 + AC) \, E(W_b)   or   E(W_i') \le (1 - AC) \, E(W_b)          (6)

where E(W_i') is the sum of errors evaluated at the superficial state W_i' and W_b is the best solution found so far.

The above condition is called the AC and is used to activate some of the tabued solutions. The solution obtained in the SS is then searched further in its neighborhood; this is the DS.
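Assuming the reading of equation (6) given above, the aspiration test reduces to a one-line check; the function name and parameterization below are illustrative.

```c
/* Aspiration criterion of equation (6): a tabued superficial state W_i'
 * is reactivated when its error differs sufficiently from the error of
 * the best solution W_b found so far (ac is a small user-chosen fraction). */
int satisfies_ac(double e_superficial, double e_best, double ac)
{
    return (e_superficial >= (1.0 + ac) * e_best) ||
           (e_superficial <= (1.0 - ac) * e_best);
}
```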
Steps involved in TBBP

Here we discuss the basic steps involved in the algorithm. It mainly consists of 7 steps; a sketch of the resulting search loop is given after the list.

(1) Generate an initial solution W_i, i = 0, 1, 2, ....
(2) Superficial search:
    (i) The initial weight is trained with the BP algorithm to obtain the superficial state W_i' and E(W_i').
    (ii) If this solution is in the TL and does not satisfy the AC, go to step (1) to generate a new solution.
    (iii) Else add the solution to the TL and go to step (3) for DS.
(3) Deep search:
    (i) Deeply search W_i' to obtain the corresponding deep state W_i'' and its error E(W_i'').
    (ii) If E(W_i'') < E(W_b), then set W_b = W_i'' and E(W_b) = E(W_i'').
(4) Generate a new solution W_{ij}' in the neighborhood of W_i' and evaluate E(W_{ij}').
(5) If this new superficial solution W_{ij}' is in the TL and does not satisfy the AC, go to step (4) to generate a new neighbor; else add W_{ij}' to the TL and go to step (6).
(6) Deeply search W_{ij}' as in step (3) and update the best solution if needed.
(7) If j is less than the maximum number of neighborhood searches, go to step (4). Else finalize W_b as the best solution.
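Putting the pieces together, the control flow of steps (1)-(7) can be sketched as follows. The helper routines random_weights(), superficial_search(), deep_search(), neighbor_of() and error_of() are hypothetical stand-ins for the BP training and evaluation code; in_tabu_list(), add_to_tabu_list() and satisfies_ac() are the sketches given earlier, and only the loop structure is intended to mirror the steps above.

```c
#define NW       10     /* weights in the network (assumption)            */
#define N_SS     50     /* superficial states searched (value of Sec. 5)  */
#define N_NEIGH 100     /* neighbours per superficial state (Sec. 5)      */

void   random_weights(double *w);
void   superficial_search(double *w);   /* a few BP epochs (SS)           */
double deep_search(double *w);          /* BP to the concave floor (DS),
                                           returns the final error        */
void   neighbor_of(const double *w, double *wn);
double error_of(const double *w);
int    in_tabu_list(const double *w, double ta);
void   add_to_tabu_list(const double *w);
int    satisfies_ac(double e, double e_best, double ac);

void tbbp_train(double ta, double ac, double *w_best, double *e_best)
{
    double w[NW], ws[NW], wn[NW];
    *e_best = 1e30;

    for (int i = 0; i < N_SS; i++) {
        /* steps (1)-(2): superficially search a fresh random solution */
        do {
            random_weights(w);
            superficial_search(w);
        } while (in_tabu_list(w, ta) &&
                 !satisfies_ac(error_of(w), *e_best, ac));
        add_to_tabu_list(w);
        for (int k = 0; k < NW; k++) ws[k] = w[k];   /* remember W_i'   */

        /* step (3): deep search and best-solution update */
        double e = deep_search(w);                   /* w becomes W_i'' */
        if (e < *e_best) {
            *e_best = e;
            for (int k = 0; k < NW; k++) w_best[k] = w[k];
        }

        /* steps (4)-(7): examine neighbours of the superficial state W_i' */
        for (int j = 0; j < N_NEIGH; j++) {
            neighbor_of(ws, wn);                               /* step (4) */
            if (in_tabu_list(wn, ta) &&
                !satisfies_ac(error_of(wn), *e_best, ac))
                continue;                  /* step (5): tabu, try another  */
            add_to_tabu_list(wn);
            e = deep_search(wn);                               /* step (6) */
            if (e < *e_best) {
                *e_best = e;
                for (int k = 0; k < NW; k++) w_best[k] = wn[k];
            }
        }
    }
}
```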
5. Experimental Results

In this section we present the experimental results and compare the performance of the equalizer trained using BP and using TBBP.

Both the BP and TBBP programs are written in C and compiled using Microsoft VC++ 6.0. The plots have been produced using Microsoft Excel 2003. The channels considered for equalization are

H_1(z) = 0.9413 + 0.3841 z^{-1} + 0.5684 z^{-2} + 0.4201 z^{-3} + 1.0 z^{-4}

H_2(z) = 0.4084 + 0.8164 z^{-1} + 0.4084 z^{-2}

For our experiment we have considered a simple three-layered neural network with decision feedback. The equalizer orders considered are m = 5, 3, the feedback orders n_b = 4, 2, and the decision delays d = 1, 2, with a single node in the hidden layer, in order to bring out the superior performance of TS even with this very small structure.

In Fig. 3 and Fig. 4 we can see that the performance of the equalizer when trained using TBBP is far better than that trained using BP. In Table 1 we give the number of wrong decisions made by the equalizer when trained using the BP algorithm and using TBBP. The figures are obtained by testing the equalizer with 10 lakh (one million) samples.

Fig. 3. SNR vs BER plot for BP and TBBP for H1(z)

Fig. 4. SNR vs BER plot for BP and TBBP for H2(z)

In TBBP, we have searched 50 superficial states, and 100 neighbor solutions are generated in the neighborhood of each superficial state. In BP we have trained the neural network with 2000 samples.

Table 1 shows that the number of errors made in decision making by the equalizer is far lower when trained by TBBP than when trained using BP.
At 20 dB SNR the number of errors that occur during classification is more than eighty thousand for the case of BP, whereas with TBBP the equalizer classifies with only slightly more than 100 decision errors. This shows the superior performance of TS compared to the simple BP algorithm. To obtain similar performance, the BP algorithm needs a more complex structure with 6 nodes in its hidden layer, as shown in Fig. 5.

Fig. 5. SNR vs BER plot for BP (6-1 structure) and TBBP (1-1 structure) for H1(z)

For the BP algorithm, (5 + 4) x 6 + 6 = 60 adjustable parameters (weights) are needed to obtain performance similar to that which the TBBP algorithm achieves with just (5 + 4) x 1 + 1 = 10 weights; i.e. the proposed TBBP algorithm reduces the required number of adjustable parameters from 60 to 10.
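The counts quoted above follow directly from the equalizer structure if the figures are taken to count the input-to-hidden and hidden-to-output weights (an interpretation of the arithmetic in the text, with m = 5 inputs, n_b = 4 feedback inputs and N_h hidden nodes):

N_w = (m + n_b) N_h + N_h, \qquad (5 + 4) \cdot 6 + 6 = 60 \ (N_h = 6), \qquad (5 + 4) \cdot 1 + 1 = 10 \ (N_h = 1).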
Table 1. Comparison of number of errors in decision making

SNR (dB)      BP        TBBP
0             326628    322552
2             298639    292553
4             268182    259243
6             236535    220696
8             206227    174371
10            175914    118052
12            148047     62426
14            124420     24085
16            106366      6637
18             93063      1168
20             83081       111

6. Conclusion

In this paper a novel method of training a neural network using TBBP is proposed. The principal advantage of using Tabu search is that it can jump out of local minima by extending its search into the global space. Another advantage is that it avoids already searched concaves effectively and hence saves time. The paper demonstrates the efficiency of the search algorithm, together with the neural network, in improving the performance of the equalizer even with a simple structure.

References

[1] S. Haykin, Neural Networks: A Comprehensive Foundation (2nd Ed., Pearson Education, 2001).
[2] R. P. Lippmann, An Introduction to Computing with Neural Nets, IEEE ASSP Magazine, 1987, 4-22.
[3] B. Widrow and S. D. Stearns, Adaptive Signal Processing (Englewood Cliffs, NJ: Prentice Hall, 1985).
[4] Jian Ye, Junfei Qiao, Ming-ai Li, Xiaogang Ruan, A tabu based neural network learning algorithm, Neurocomputing, 70, 2007, 875-882.
[5] F. Glover, Tabu Search – Part I, ORSA Journal on Computing, vol. 1, 1989, 190-206.
[6] F. Glover, Tabu Search – Part II, ORSA Journal on Computing, vol. 2, 1990, 4-32.
[7] S. Haykin, Adaptive Filter Theory (4th Ed., Pearson Education, 2002).
[8] S. Qureshi, Adaptive Equalization, Proc. IEEE, 1985, 1349-1387.
[9] J. G. Proakis, Digital Communications (New York: McGraw-Hill, 1983).
[10] S. Siu, G. J. Gibson and C. F. N. Cowan, Multi-layer perceptron structures applied to adaptive equalizers for data communications, Proc. IEEE ICASSP, Glasgow, Scotland, May 1989, 1183-1186.
