
A

PROJECT REPORT
ON
BER ANALYSIS OF A NOVEL LTI
TECHNIQUE ADOPTING SUI MODELING
FOR CONVENTIONAL AND WAVELET
BASED OFDM
ABSTRACT

Orthogonal Frequency Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO) are the two main techniques employed in 4th Generation Long Term Evolution (LTE). OFDM uses multiple carriers and provides a higher level of spectral efficiency than Frequency Division Multiplexing (FDM). In OFDM, loss of orthogonality between the subcarriers causes intercarrier interference (ICI) and intersymbol interference (ISI); to overcome this problem a cyclic prefix (CP) is required, which consumes about 20% of the available bandwidth. Wavelet-based OFDM provides good orthogonality, and with its use the Bit Error Rate (BER) is improved. A wavelet-based system does not require a cyclic prefix, so spectral efficiency is increased. It is proposed to use wavelet-based OFDM in place of Discrete Fourier Transform (DFT) based OFDM in LTE. We have compared the BER performance of wavelet-based and DFT-based OFDM, using the Stanford University Interim (SUI) channel in place of the AWGN channel.
CHAPTER 1

INTRODUCTION
The revolution in wireless communications is certainly one of the most extraordinary changes underlying our contemporary world. Although we may not realize it, every day our lives are profoundly affected by the use of radio waves. Radio and television transmissions, radio-controlled devices, mobile telephones, satellite communications, radar, and systems of radio navigation are all examples of wireless communication happening around us. However, less than a hundred years ago, none of these existed; the telegraph and telephone, which required a direct wire connection between two places, were the most common means of communication.

The remarkable advancement in today's communication is the result of the work of an Italian scientist, Guglielmo Marconi, who began experiments using radio waves for communication in 1895. These invisible waves traveled through the air, and since the receiving and transmitting equipment was not connected by wires, this method of communication became known as wireless communication. Marconi's first success came in 1897, when he demonstrated radio's ability to provide continuous contact with ships sailing the English Channel.

By 1920, radio circled the globe; the first radio transmitter had been developed and broadcast programs to the public. Later, the idea of radio was adopted by television, radar, and communication systems, as advances in electronic equipment enabled radio waves to be sent over greater distances.

The Cellular Concept:

The world's first cellular network was introduced in the early 1980s, using analog radio transmission technologies such as AMPS (Advanced Mobile Phone System). To explain the cellular concept, a diagrammatic representation is presented in Figure 2.1.
In the figure, each colored cell is viewed as the (approximate) coverage area of a particular land site. Each cell uses a distinct set of frequencies (channels), shown by the difference in color between cells. However, cells that are far enough apart to avoid co-channel interference can reuse the same channel set. In the cellular concept, a mobile user is allowed mobility as a call is "handed off" from one cell to another when the user leaves one cell and enters another.

The AMPS cellular system was very popular. However, with the gigantic increase in subscribers, on the order of millions each year, AMPS cellular systems began to overload and became incapable of delivering sufficient air time to each user. To overcome this problem, more effective multiple access techniques were invented.

Bit Error Rate:

In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion, or bit synchronization errors.

The bit error rate or bit error ratio (BER) is the number of bit errors divided by the total number of transferred bits during a studied time interval. BER is a unitless performance measure, often expressed as a percentage.

The bit error probability pe is the expectation value of the BER. The BER can be
considered as an approximate estimate of the bit error probability. This estimate is accurate for a
long time interval and a high number of bit errors.

Example:

As an example, assume this transmitted bit sequence:

0 1 1 0 0 0 1 0 1 1,

And the following received bit sequence:

0 0 1 0 1 0 1 0 0 1,
The number of bit errors (the second, fifth, and ninth bits) is in this case 3. The BER is 3 incorrect bits divided by 10 transferred bits, resulting in a BER of 0.3 or 30%.
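The example above can be reproduced with a few lines of code (a sketch, not part of the report; the bit sequences are the ones given in the example):

```python
# BER computation for the example sequences: count mismatched bits,
# then divide by the number of transferred bits.
transmitted = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1]
received    = [0, 0, 1, 0, 1, 0, 1, 0, 0, 1]

bit_errors = sum(t != r for t, r in zip(transmitted, received))
ber = bit_errors / len(transmitted)

print(bit_errors)  # 3
print(ber)         # 0.3
```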

Channel:

In telecommunications and computer networking, a communication channel, or channel, refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel. A channel is used to convey an information signal, for example a digital bit stream, from one or several senders (or transmitters) to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.

Channel models:

A channel can be modelled physically by trying to calculate the physical processes which modify the transmitted signal. For example, in wireless communications the channel can be modelled by calculating the reflection off every object in the environment. A sequence of random numbers might also be added to simulate external interference and/or electronic noise in the receiver.

Statistically, a communication channel is usually modelled as a triple consisting of an input alphabet, an output alphabet, and, for each pair (i, o) of input and output elements, a transition probability p(i, o). Semantically, the transition probability is the probability that the symbol o is received given that i was transmitted over the channel.
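As an illustration of this triple, a binary symmetric channel can be sketched as follows (the crossover probability 0.1 is an assumed value, not taken from the text):

```python
# Minimal statistical channel model: binary symmetric channel (BSC).
# Input and output alphabets are both {0, 1}; p(i, o) is the transition
# probability that o is received given that i was transmitted.
p_flip = 0.1  # illustrative crossover probability
transition = {
    (0, 0): 1 - p_flip, (0, 1): p_flip,
    (1, 0): p_flip,     (1, 1): 1 - p_flip,
}

def p(i, o):
    """Probability that symbol o is received given i was transmitted."""
    return transition[(i, o)]

# Sanity check: for each input, the output probabilities sum to 1.
assert abs(p(0, 0) + p(0, 1) - 1.0) < 1e-12
assert abs(p(1, 0) + p(1, 1) - 1.0) < 1e-12
```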

Statistical and physical modelling can be combined. For example, in wireless communications the channel is often modelled by a random attenuation (known as fading) of the transmitted signal, followed by additive noise. The attenuation term is a simplification of the underlying physical processes and captures the change in signal power over the course of the transmission. The noise in the model captures external interference and/or electronic noise in the receiver. If the attenuation term is complex, it also describes the relative time a signal takes to get through the channel. The statistics of the random attenuation are decided by previous measurements or physical simulations.
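The combined model described above can be sketched as follows (the complex Gaussian gain giving a Rayleigh-distributed magnitude, the noise level, and the block length are all illustrative assumptions):

```python
import numpy as np

# Combined statistical model: received = (random complex attenuation)
# * transmitted + additive noise. A complex Gaussian gain h (so |h| is
# Rayleigh-distributed) is a common flat-fading assumption.
rng = np.random.default_rng(0)

def faded_awgn_channel(x, noise_std=0.1):
    # Random complex attenuation (fading), one realization per block.
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    # Additive complex Gaussian noise.
    n = noise_std * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))
    return h * x + n, h

x = np.ones(4, dtype=complex)   # toy transmitted block
y, h = faded_awgn_channel(x)
print(y.shape)  # (4,)
```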

Channel models may be continuous channel models in that there is no limit to how
precisely their values may be defined.

Communication channels are also studied in a discrete-alphabet setting. This corresponds to abstracting a real-world communication system in which the analog-to-digital and digital-to-analog blocks are outside the control of the designer. The mathematical model consists of a transition probability that specifies an output distribution for each possible sequence of channel inputs. In information theory, it is common to start with memoryless channels, in which the output probability distribution depends only on the current channel input.

AWGN:

Additive white Gaussian noise (AWGN) is a channel model in which the only impairment
to communication is a linear addition of wideband or white noise with a constant spectral
density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude.
The model does not account for fading, frequency selectivity, interference, nonlinearity or
dispersion. However, it produces simple and tractable mathematical models which are useful for
gaining insight into the underlying behavior of a system before these other phenomena are
considered.
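A minimal AWGN channel simulation in this spirit might look like the following (the SNR, random seed, and BPSK-like ±1 symbols are illustrative assumptions, not values from the report):

```python
import numpy as np

# AWGN channel sketch: the only impairment is additive white Gaussian
# noise, scaled to achieve a chosen signal-to-noise ratio.
rng = np.random.default_rng(42)

def awgn(signal, snr_db):
    """Add white Gaussian noise at the given SNR (in dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

tx = np.sign(rng.standard_normal(10_000))  # random +/-1 (BPSK-like) symbols
rx = awgn(tx, snr_db=10)
ber = np.mean(np.sign(rx) != tx)
print(ber)  # small at 10 dB SNR
```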

Wideband Gaussian noise comes from many natural sources, such as the thermal
vibrations of atoms in conductors (referred to as thermal noise or Johnson-Nyquist noise), shot
noise, black body radiation from the earth and other warm objects, and from celestial
sources such as the Sun.

The AWGN channel is a good model for many satellite and deep space communication
links. It is not a good model for most terrestrial links because of multipath, terrain blocking,
interference, etc. However, for terrestrial path modeling, AWGN is commonly used to simulate
background noise of the channel under study, in addition to multipath, terrain blocking,
interference, ground clutter and self interference that modern radio systems encounter in
terrestrial operation.
SUI:

Many readers may be experts in modeling, programming, or higher layers of networking but may
not be familiar with many PHY layer concepts. This tutorial on Channel Models has been
designed for such readers. This information has been gathered from various IEEE and ITU
standards and contributions and published books.

The characteristics of a wireless signal change as it travels from the transmitter antenna to the receiver antenna. These characteristics depend upon the distance between the two antennas, the path(s) taken by the signal, and the environment (buildings and other objects) around the path.

The profile of the received signal can be obtained from that of the transmitted signal if we have a model of the medium between the two. This model of the medium is called the channel model. In general, the power profile of the received signal can be obtained by convolving the power profile of the transmitted signal with the impulse response of the channel. Convolution in the time domain is equivalent to multiplication in the frequency domain. Therefore, the transmitted signal x, after propagation through the channel H, becomes y:

y(f) = H(f) x(f) + n(f)

Here H(f) is the channel response and n(f) is the noise. Note that x, y, H, and n are all functions of the signal frequency f.
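The equivalence between time-domain convolution and frequency-domain multiplication can be checked numerically (the 3-tap impulse response and block length are toy values):

```python
import numpy as np

# Verify y(f) = H(f) x(f): circular convolution with the channel impulse
# response in the time domain equals per-frequency multiplication.
rng = np.random.default_rng(1)

x = rng.standard_normal(64)      # transmitted block
h = np.array([1.0, 0.5, 0.25])   # toy 3-tap channel impulse response

# Time domain: circular convolution (as for an OFDM symbol with CP),
# implemented here via the FFT of length 64.
y_time = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, 64)))

# Frequency domain: multiplication Y = H * X on each frequency bin.
Y = np.fft.fft(h, 64) * np.fft.fft(x)

print(np.allclose(np.fft.fft(y_time), Y))  # True
```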

Stanford University Interim (SUI) Channel Models

This is a set of 6 channel models representing three terrain types and a variety of Doppler spreads, delay spreads, and line-of-sight/non-line-of-sight conditions that are typical of the continental US, as follows [Erceg2001]:

The terrain types A, B, and C are the same as those defined earlier for the Erceg model. The multipath fading is modeled as a tapped delay line with 3 taps with non-uniform delays. The gain associated with each tap is characterized by a Rician distribution and the maximum Doppler frequency.
Table 5.3 : Terrain Type and Doppler Spread for SUI Channel Models
1. Input Mixing Matrix
This part models correlation between input signals if multiple transmitting antennas are used.
2. Tapped Delay Line Matrix
This part models the multipath fading of the channel. The multipath fading is modeled as a tapped delay line with 3 taps with non-uniform delays. The gain associated with each tap is characterized by a distribution (Rician with a K-factor > 0, or Rayleigh with a K-factor = 0) and the maximum Doppler frequency.

3. Output Mixing Matrix
This part models the correlation between output signals if multiple receiving antennas are used. Using the above general structure of the SUI channel and assuming the following scenario, six SUI channels are constructed which are representative of real channels.

Scenario for SUI Channel Models

In the following models, the total channel gain is not normalized. Before using a SUI model, the specified normalization factors have to be added to each tap to arrive at 0 dB total mean power. The specified Doppler is the maximum frequency parameter.

The Gain Reduction Factor (GRF) is the total mean power reduction for a 30° antenna compared to an omni antenna. If 30° antennas are used, the specified GRF should be added to the path loss. Note that this implies that all 3 taps are affected equally due to the effects of local scattering. K-factors have linear values, not dB values.
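A sketch of generating one realization of such 3-tap gains might look like this (the K-factor and tap powers are illustrative placeholders, not the normalized SUI table values; Doppler filtering is omitted):

```python
import numpy as np

# SUI-style tapped delay line gains: 3 taps, first tap Rician (K > 0),
# remaining taps Rayleigh (K = 0). One static realization is drawn;
# a full simulator would also shape the taps with a Doppler filter.
rng = np.random.default_rng(7)

def sui_like_taps(k_factor=4.0, tap_powers=(1.0, 0.5, 0.25)):
    taps = []
    for idx, power in enumerate(tap_powers):
        K = k_factor if idx == 0 else 0.0            # Rician only on tap 0
        los = np.sqrt(K / (K + 1)) * np.sqrt(power)  # fixed (LOS) component
        scatter = np.sqrt(power / (2 * (K + 1))) * (
            rng.standard_normal() + 1j * rng.standard_normal())
        taps.append(los + scatter)
    return np.array(taps)

taps = sui_like_taps()
print(taps.shape)  # (3,)
```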
INTRODUCTION TO WAVELETS

The wavelet transform provides a time-frequency representation of a signal. (There are other transforms which give this information too, such as the short time Fourier transform, Wigner distributions, etc.)

Often a particular spectral component occurring at a given instant is of particular interest. In these cases it may be very beneficial to know the time intervals in which these particular spectral components occur. For example, in EEGs, the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus such as a flash of light; the latency of this response is the amount of time elapsed between the onset of the stimulus and the response).

The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal.

How the wavelet transform works is best explained after the short time Fourier transform (STFT). The WT was developed as an alternative to the STFT, to overcome some resolution-related problems of the STFT, as explained below.

In short, we pass the time-domain signal through various high-pass and low-pass filters, which filter out either the high-frequency or the low-frequency portions of the signal. This procedure is repeated, each time removing some portion of the signal corresponding to some frequencies.

Here is how this works: suppose we have a signal with frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a high-pass and a low-pass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion of the signal corresponding to 0-500 Hz (the low-pass portion) and to 500-1000 Hz (the high-pass portion).

Then, we take either portion (usually low pass portion) or both, and do the same
thing again. This operation is called decomposition.

Assuming that we have taken the low pass portion, we now have 3 sets of data, each
corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, 500-1000 Hz.

Then we take the low-pass portion again and pass it through low- and high-pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a pre-defined level. We then have a collection of signals which actually represent the same signal, but each corresponding to a different frequency band. We know which signal corresponds to which frequency band, and if we put all of them together and plot them on a 3-D graph, we will have time on one axis, frequency on the second, and amplitude on the third axis. This will show us which frequencies exist at which times (there is an issue, called the "uncertainty principle", which states that we cannot know exactly what frequency exists at what time instant, but only what frequency bands exist at what time intervals).
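The successive splitting described above can be sketched with the simplest (Haar) low-/high-pass filter pair; this filter choice is an illustrative assumption:

```python
import numpy as np

# One decomposition stage: split a signal into decimated low-pass and
# high-pass half-bands using the Haar filter pair.
def haar_split(x):
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass half-band
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass half-band
    return low, high

def decompose(x, levels):
    """Repeatedly split the low-pass portion, keeping each high-pass band."""
    bands = []
    low = np.asarray(x, dtype=float)
    for _ in range(levels):
        low, high = haar_split(low)
        bands.append(high)
    bands.append(low)                        # coarsest approximation last
    return bands

bands = decompose(np.arange(8.0), levels=3)
print([len(b) for b in bands])  # [4, 2, 1, 1]
```

Because the Haar pair is orthonormal, the total energy of the bands equals the energy of the original signal, so nothing is lost in the splitting.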

The uncertainty principle, originally formulated by Heisenberg, states that the momentum and the position of a moving particle cannot both be known simultaneously. This applies to our subject as follows:

The frequency and time information of a signal at some certain point in the time-
frequency plane cannot be known. In other words: We cannot know what spectral component
exists at any given time instant. The best we can do is to investigate what spectral components
exist at any given interval of time. This is a problem of resolution, and it is the main reason why
researchers have switched to WT from STFT. STFT gives a fixed resolution at all times, whereas
WT gives a variable resolution as follows:

Higher frequencies are better resolved in time, and lower frequencies are better
resolved in frequency. This means that, a certain high frequency component can be located better
in time (with less relative error) than a low frequency component. On the contrary, a low
frequency component can be located better in frequency compared to high frequency component.
Below are some examples of the continuous wavelet transform:
Let's take a sinusoidal signal, which has two different frequency components at two
different times:
Note the low frequency portion first, and then the high frequency.

Figure 6

The continuous wavelet transform of the above signal:

Figure 7

Note, however, that the frequency axis in these plots is labeled as scale. The concept of scale will be made clearer in subsequent sections, but it should be noted at this time that scale is the inverse of frequency. That is, high scales correspond to low frequencies, and low scales correspond to high frequencies. Consequently, the little peak in the plot corresponds to the high-frequency components in the signal, and the large peak corresponds to the low-frequency components (which appear before the high-frequency components in time).

You might be puzzled by the frequency resolution shown in the plot, since it shows good frequency resolution at high frequencies. Note, however, that it is the scale resolution that looks good at high frequencies (low scales), and good scale resolution means poor frequency resolution, and vice versa.

The Fourier transform:

In the 19th century (1822, to be exact), the French mathematician J. Fourier showed that any periodic function can be expressed as an infinite sum of periodic complex exponential functions. Many years after he discovered this remarkable property of (periodic) functions, his ideas were generalized first to non-periodic functions, and then to periodic and non-periodic discrete-time signals. It is after this generalization that the transform became a very suitable tool for computer calculations. In 1965, a new algorithm called the fast Fourier transform (FFT) was developed, and the FT became even more popular.

Now let us take a look at how the Fourier transform works. The FT decomposes a signal into complex exponential functions of different frequencies. The way it does this is defined by the following two equations:

X(f) = ∫ x(t).e^(-j.2.pi.f.t) dt .......(1)

x(t) = ∫ X(f).e^(j.2.pi.f.t) df .......(2)

(both integrals taken from minus infinity to plus infinity). In the above equations, t stands for time, f stands for frequency, and x denotes the signal at hand. Note that x denotes the signal in the time domain and X denotes the signal in the frequency domain. This convention is used to distinguish the two representations of the signal. FT equation (1) is called the Fourier transform of x(t), and FT equation (2) is called the inverse Fourier transform of X(f), which is x(t).

Those of you who have been using the Fourier transform are already familiar with this. Unfortunately, many people use these equations without knowing the underlying principle.

The signal x(t) is multiplied by an exponential term at some certain frequency f, and then integrated over all time. Note that the exponential term in FT equation (1) can also be written as:

cos(2.pi.f.t) + j.sin(2.pi.f.t) .......(3)

The above expression has a real part, a cosine of frequency f, and an imaginary part, a sine of frequency f. So what we are actually doing is multiplying the original signal by a complex expression which has sines and cosines of frequency f. Then we integrate this product; in other words, we add all the points in this product. If the result of this integration (which is nothing but some sort of infinite summation) is a large value, then we say that the signal x(t) has a dominant spectral component at frequency f; a major portion of this signal is composed of frequency f. If the integration result is a small value, then the signal does not have a major frequency component of f in it. If this integration result is zero, then the signal does not contain the frequency f at all.
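This "multiply and integrate" idea can be demonstrated numerically (the 50 Hz test tone and sampling grid are illustrative choices):

```python
import numpy as np

# Numerically evaluate the FT integral of a 50 Hz cosine: the result is
# large at f = 50 and essentially zero at a frequency the signal lacks.
t = np.linspace(0, 1, 10_000, endpoint=False)
x = np.cos(2 * np.pi * 50 * t)

def ft_coefficient(signal, f):
    """Approximate the FT integral by a Riemann sum over one second."""
    return np.sum(signal * np.exp(-2j * np.pi * f * t)) * (t[1] - t[0])

at_50 = abs(ft_coefficient(x, 50))   # dominant component -> large value
at_80 = abs(ft_coefficient(x, 80))   # absent component   -> near zero

print(at_50 > 100 * at_80)  # True
```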

It is of particular interest here to see how this integration works: The signal is
multiplied with the sinusoidal term of frequency "f". If the signal has a high amplitude
component of frequency "f", then that component and the sinusoidal term will coincide, and the
product of them will give a (relatively) large value. This shows that, the signal "x", has a major
frequency component of "f".

However, if the signal does not have a frequency component of "f", the
product will yield zero, which shows that, the signal does not have a frequency component of "f".
If the frequency "f", is not a major component of the signal "x(t)", then the product will give a
(relatively) small value. This shows that, the frequency component "f" in the signal "x", has a
small amplitude, in other words, it is not a major component of "x".

Now, note that the integration in the transformation equation (FT Eqn. 1) is
over time. The left hand side of (1), however, is a function of frequency. Therefore, the integral
in (1), is calculated for every value of f.

The information provided by the integral corresponds to all time instances, since the integration is from minus infinity to plus infinity over time. It follows that no matter where in time the component with frequency f appears, it will affect the result of the integration equally. In other words, whether the frequency component f appears at time t1 or t2, it will have the same effect on the integration. This is why the Fourier transform is not suitable if the signal has a time-varying frequency, i.e., if the signal is non-stationary. Only if the signal has the frequency component f at all times (for all t values) does the result obtained by the Fourier transform make sense.
Note that the Fourier transform tells whether a certain frequency component exists
or not. This information is independent of where in time this component appears. It is therefore
very important to know whether a signal is stationary or not, prior to processing it with the FT.

The short time Fourier transform:

There is only a minor difference between STFT and FT. In STFT, the signal is divided
into small enough segments, where these segments (portions) of the signal can be assumed to be
stationary. For this purpose, a window function "w" is chosen. The width of this window must be
equal to the segment of the signal where its stationarity is valid.

This window function is first located to the very beginning of the signal. That is, the
window function is located at t=0. Let's suppose that the width of the window is "T" s. At this
time instant (t=0), the window function will overlap with the first T/2 seconds (I will assume that
all time units are in seconds). The window function and the signal are then multiplied. By doing
this, only the first T/2 seconds of the signal is being chosen, with the appropriate weighting of
the window (if the window is a rectangle, with amplitude "1", then the product will be equal to
the signal). Then this product is assumed to be just another signal, whose FT is to be taken. In
other words, FT of this product is taken, just as taking the FT of any signal.

The result of this transformation is the FT of the first T/2 seconds of the signal. If this
portion of the signal is stationary, as it is assumed, then there will be no problem and the
obtained result will be a true frequency representation of the first T/2 seconds of the signal.

The next step, would be shifting this window (for some t1 seconds) to a new
location, multiplying with the signal, and taking the FT of the product. This procedure is
followed, until the end of the signal is reached by shifting the window with "t1" seconds
intervals.
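The windowing procedure described above can be sketched as follows (the Hanning window, window length, and hop size are illustrative choices):

```python
import numpy as np

# STFT sketch: slide a window along the signal, multiply the window with
# the segment it covers, and take the FT of each windowed segment.
def stft(x, win_len=64, hop=32):
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        segment = x[start:start + win_len] * window  # window * signal
        frames.append(np.fft.rfft(segment))          # FT of the product
    return np.array(frames)                          # (time, frequency)

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)                      # 100 Hz tone

S = stft(x)
print(S.shape)  # (30, 33): 30 window positions, 33 frequency bins
```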

The following definition of the STFT summarizes all of the above explanations in one line:

STFT_x(t', f) = ∫ [x(t).w*(t - t')].e^(-j.2.pi.f.t) dt

Please look at the above equation carefully: x(t) is the signal itself, w(t) is the window function, and * denotes the complex conjugate. As you can see from the equation, the STFT of the signal is nothing but the FT of the signal multiplied by a window function. For every t' and f a new STFT coefficient is computed.

The following figure (fig. 8) may help you to understand this a little better:
Fig.8

The Gaussian-like functions in color are the windowing functions. The red one
shows the window located at t=t1', the blue shows t=t2', and the green one shows the window
located at t=t3'. These will correspond to three different FTs at three different times. Therefore,
we will obtain a true time-frequency representation (TFR) of the signal.

Probably the best way of understanding this would be looking at an example. First
of all, since our transform is a function of both time and frequency (unlike FT, which is a
function of frequency only), the transform would be two dimensional (three, if you count the
amplitude too). Let's take a non-stationary signal, such as the following one in figure 9:
Figure 9

In this signal, there are four frequency components at different times. The interval
0 to 250 ms is a simple sinusoid of 300 Hz, and the other 250 ms intervals are sinusoids of 200
Hz, 100 Hz, and 50 Hz, respectively. Apparently, this is a non-stationary signal. Now, let's look
at its STFT:

Figure 10

As expected, this is a two-dimensional plot (three-dimensional, if you count the amplitude too). The x and y axes are time and frequency, respectively. Please ignore the numbers on the axes, since they are normalized in some respect which is not of any interest to us at this time. Just examine the shape of the time-frequency representation.

First of all, note that the graph is symmetric with respect to the midline of the frequency axis. Remember that, although it was not shown, the FT of a real signal is always symmetric; since the STFT is nothing but a windowed version of the FT, it should come as no surprise that the STFT is also symmetric in frequency. The symmetric part is said to be associated with negative frequencies, an odd concept which is difficult to comprehend; fortunately, it is not important here. It suffices to know that the STFT and FT are symmetric.

What is important are the four peaks; note that there are four peaks corresponding to the four different frequency components. Also note that, unlike the FT, these four peaks are located at different intervals along the time axis. Remember that the original signal had four spectral components located at different times.

You may wonder: since the STFT gives the TFR of the signal, why do we need the wavelet transform? The implicit problem of the STFT is not obvious in the above example. Of course, an example that works nicely was chosen on purpose to demonstrate the concept.

The problem with the STFT has its roots in what is known as the Heisenberg uncertainty principle. This principle, originally applied to the momentum and location of moving particles, can be applied to the time-frequency information of a signal. Simply put, it states that one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instants of time. What one can know is the time intervals in which a certain band of frequencies exists, which is a resolution problem.

The problem with the STFT has to do with the width of the window function that is used. To be technically correct, this width of the window function is known as the support of the window. If the window function is narrow, it is known as compactly supported. This terminology is more often used in the wavelet world, as we will see later.
Recall that in the FT there is no resolution problem in the frequency domain, i.e., we know exactly what frequencies exist; similarly, there is no time resolution problem in the time domain, since we know the value of the signal at every instant of time. Conversely, the time resolution in the FT and the frequency resolution in the time domain are zero, since we have no information about them. What gives perfect frequency resolution in the FT is the fact that the window used in the FT is its kernel, the exp{jwt} function, which lasts for all time from minus infinity to plus infinity. In the STFT, our window is of finite length; it covers only a portion of the signal, which causes the frequency resolution to get poorer. By getting poorer I mean that we no longer know the exact frequency components that exist in the signal; we only know a band of frequencies that exist.

In the FT, the kernel function allows us to obtain perfect frequency resolution, because the kernel itself is a window of infinite length. In the STFT the window is of finite length, and we no longer have perfect frequency resolution. You may ask: why don't we make the length of the window in the STFT infinite, just as it is in the FT, to get perfect frequency resolution? Well, then you lose all the time information; you basically end up with the FT instead of the STFT. To make a long story short, we are faced with the following dilemma:

If we use a window of infinite length, we get the FT, which gives perfect frequency resolution but no time information. Furthermore, in order to obtain stationarity, we have to have a short enough window, in which the signal is stationary. The narrower we make the window, the better the time resolution and the better the assumption of stationarity, but the poorer the frequency resolution:

Narrow window ===> good time resolution, poor frequency resolution.
Wide window ===> good frequency resolution, poor time resolution.

The continuous wavelet transform:

The continuous wavelet transform was developed as an alternative approach to the short time Fourier transform to overcome the resolution problem. The wavelet analysis is done in a similar way to the STFT analysis, in the sense that the signal is multiplied by a function, the wavelet, similar to the window function in the STFT, and the transform is computed separately for different segments of the time-domain signal. However, there are two main differences between the STFT and the CWT:

1. The Fourier transforms of the windowed signals are not taken, and therefore a single peak will be seen corresponding to a sinusoid, i.e., negative frequencies are not computed.

2. The width of the window is changed as the transform is computed for every
single spectral component, which is probably the most significant characteristic of the wavelet
transform.

The continuous wavelet transform is defined as follows:

CWT_x(tau, s) = (1/sqrt(|s|)) ∫ x(t).psi*((t - tau)/s) dt .......(5.1.1)

As seen in equation 5.1.1, the transformed signal is a function of two variables, tau and s, the translation and scale parameters, respectively. psi(t) is the transforming function, and it is called the mother wavelet. The term mother wavelet gets its name from two important properties of the wavelet analysis.

The term wavelet means a small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported). The wave refers to the condition that this function is oscillatory. The term mother implies that the functions with different regions of support that are used in the transformation process are derived from one main function, the mother wavelet. In other words, the mother wavelet is a prototype for generating the other window functions.

The term translation is used in the same sense as in the STFT: it is related to the location of the window as the window is shifted through the signal. This term obviously corresponds to the time information in the transform domain. However, we do not have a frequency parameter as we had for the STFT. Instead, we have a scale parameter, which is defined as 1/frequency. The term frequency is reserved for the STFT.
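Generating translated and scaled copies of a mother wavelet can be sketched as follows (the Ricker, or "Mexican hat", wavelet is an illustrative choice of psi, and the scales are toy values):

```python
import numpy as np

# Mother wavelet: Ricker ("Mexican hat"), a compactly concentrated,
# oscillatory function, used here as psi(t).
def ricker(t):
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def daughter(t, tau, s):
    """Translated, scaled copy: psi((t - tau)/s) / sqrt(|s|)."""
    return ricker((t - tau) / s) / np.sqrt(abs(s))

t = np.linspace(-10, 10, 2001)
wide = daughter(t, tau=0.0, s=4.0)    # large scale -> stretched wavelet
narrow = daughter(t, tau=0.0, s=0.5)  # small scale -> compressed wavelet

print(wide.max() < narrow.max())  # True: compression concentrates amplitude
```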

The Scale:
The parameter scale in the wavelet analysis is similar to the scale used in maps.
As in the case of maps, high scales correspond to a non-detailed global view (of the signal), and
low scales correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high
scales) correspond to a global information of a signal (that usually spans the entire signal),
whereas high frequencies (low scales) correspond to a detailed information of a hidden pattern in
the signal (that usually lasts a relatively short time). Cosine signals corresponding to various
scales are given as examples in the following figure .

Figure 12
Fortunately, in practical applications, low scales (high frequencies) do not last
for the entire duration of the signal, unlike those shown in the figure, but they usually appear
from time to time as short bursts, or spikes. High scales (low frequencies) usually last for the
entire duration of the signal.

Scaling, as a mathematical operation, either dilates or compresses a signal.


Larger scales correspond to dilated (or stretched out) signals and small scales correspond to
compressed signals. All of the signals given in the figure are derived from the same cosine
signal, i.e., they are dilated or compressed versions of the same function. In the above figure,
s=0.05 is the smallest scale, and s=1 is the largest scale.

In terms of mathematical functions, if f(t) is a given function, then f(st) corresponds
to a contracted (compressed) version of f(t) if s > 1, and to an expanded (dilated) version of f(t) if
s < 1.

However, in the definition of the wavelet transform, the scaling term appears
in the denominator, and therefore the opposite of the above statements holds: scales s > 1
dilate the signal, whereas scales s < 1 compress it.

The Discrete Wavelet Transform:

The Wavelet Series is just a sampled version of the CWT, and its computation may
consume a significant amount of time and resources, depending on the resolution required. The
Discrete Wavelet Transform (DWT), which is based on sub-band coding, is found to yield a fast
computation of Wavelet Transform. It is easy to implement and reduces the computation time
and resources required.

The foundations of the DWT go back to 1976, when techniques to decompose
discrete-time signals were devised. Similar work was done on speech signal coding, where it was
named sub-band coding. In 1983, a technique similar to sub-band coding was developed and
named pyramidal coding. Later, many improvements were made to these coding
schemes, resulting in efficient multi-resolution analysis schemes.
In CWT, the signals are analyzed using a set of basis functions which relate to
each other by simple scaling and translation. In the case of DWT, a time-scale representation of
the digital signal is obtained using digital filtering techniques. The signal to be analyzed is
passed through filters with different cutoff frequencies at different scales.

Multi-resolution analysis:

Although the time and frequency resolution problems are the result of a physical
phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it
is possible to analyze any signal by using an alternative approach called multi-resolution
analysis (MRA).

MRA, as implied by its name, analyzes the signal at different frequencies with different
resolutions. Every spectral component is not resolved equally, as was the case in the STFT. MRA
is designed to give good time resolution and poor frequency resolution at high frequencies, and
good frequency resolution and poor time resolution at low frequencies. This approach makes
sense especially when the signal at hand has high-frequency components for short durations and
low-frequency components for long durations. Fortunately, the signals encountered in
practical applications are often of this type. For example, Figure 11 shows a signal
of this type. It has a relatively low-frequency component throughout the entire signal and
relatively high-frequency components for a short duration somewhere around the middle.
Figure 11

Multi-Resolution Analysis using Filter Banks:

Filters are one of the most widely used signal processing functions. Wavelets can be
realized by iteration of filters with rescaling. The resolution of the signal, which is a measure of
the amount of detail information in the signal, is determined by the filtering operations, and the
scale is determined by upsampling and downsampling (subsampling) operations.

The DWT is computed by successive low-pass and high-pass filtering of the discrete
time-domain signal, as shown in Figure 13. This is called the Mallat algorithm or Mallat-tree
decomposition. Its significance is in the manner in which it connects the continuous-time
multi-resolution analysis to discrete-time filters. In the figure, the signal is denoted by the
sequence x[n], where n is an integer. The low-pass filter is denoted by G0 and the high-pass
filter by H0. At each level, the high-pass filter produces detail information d[n], while the
low-pass filter associated with the scaling function produces coarse approximations a[n].
Figure 13: Three-level wavelet decomposition tree

At each decomposition level, the half-band filters produce signals spanning only
half the frequency band. This doubles the frequency resolution, as the uncertainty in frequency
is reduced by half. In accordance with Nyquist's rule, if the original signal has a highest frequency
of ω, which requires a sampling frequency of 2ω radians, then it now has a highest frequency of
ω/2 radians. It can now be sampled at a frequency of ω radians, thus discarding half the samples
with no loss of information. This decimation by 2 halves the time resolution, as the entire signal
is now represented by only half the number of samples. Thus, while the half-band low-pass
filtering removes half of the frequencies and thus halves the resolution, the decimation by 2
doubles the scale. With this approach, the time resolution becomes arbitrarily good at high
frequencies, while the frequency resolution becomes arbitrarily good at low frequencies. The
filtering and decimation process is continued until the desired level is reached. The maximum
number of levels depends on the length of the signal.

The DWT of the original signal is then obtained by concatenating all the
coefficients, a[n] and d[n], starting from the last level of decomposition.
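One level of the Mallat decomposition can be sketched as follows, assuming Haar filters for G0 and H0 (the report does not fix a particular wavelet at this point): filter, then keep every second sample.

```python
import numpy as np

# Haar analysis filters (normalized): G0 is low-pass, H0 is high-pass,
# matching the filter names used in the decomposition tree above.
G0 = np.array([1, 1]) / np.sqrt(2)
H0 = np.array([1, -1]) / np.sqrt(2)

def dwt_level(x):
    """One level of the Mallat algorithm: filter, then decimate by 2."""
    a = np.convolve(x, G0)[1::2]   # coarse approximation a[n]
    d = np.convolve(x, H0)[1::2]   # detail information d[n]
    return a, d

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = dwt_level(x)
print(len(a), len(d))   # 4 4: each branch keeps half the samples
```

Because the Haar filters are orthonormal, the energy of x is split exactly between a and d, which is one way to see that decimation discards no information here.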
Figure 14: Three-level wavelet reconstruction tree

Figure 14 shows the reconstruction of the original signal from the wavelet
coefficients. Basically, reconstruction is the reverse of decomposition. The
approximation and detail coefficients at every level are upsampled by two, passed through the
low-pass and high-pass synthesis filters, and then added. This process is continued through the
same number of levels as in the decomposition process to obtain the original signal. The
Mallat algorithm works equally well if the analysis filters, G0 and H0, are exchanged with the
synthesis filters, G1 and H1.
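The reverse process can be sketched as follows, again assuming Haar filters: upsample each coefficient vector by two (inserting zeros), filter with the synthesis pair, and add. The synthesis filters here are the time-reversed analysis filters, a property of orthogonal wavelets.

```python
import numpy as np

G0, H0 = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
G1, H1 = G0[::-1], H0[::-1]        # synthesis = time-reversed analysis

def upsample(c):
    # Insert a zero after every coefficient (upsampling by two).
    u = np.zeros(2 * len(c))
    u[::2] = c
    return u

def idwt_level(a, d):
    """Upsample by two, filter with the synthesis pair, and add."""
    n = 2 * len(a)
    return (np.convolve(upsample(a), G1) + np.convolve(upsample(d), H1))[:n]

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a = np.convolve(x, G0)[1::2]        # analysis, as in the decomposition
d = np.convolve(x, H0)[1::2]
print(np.allclose(idwt_level(a, d), x))   # True: perfect reconstruction
```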

Stationary wavelet transforms:

The discrete stationary wavelet transform (SWT) is an undecimated version of the
DWT. The main idea is to average several detail coefficients obtained by
decomposition of the input signal without downsampling. This approach can be interpreted as a
repeated application of the standard DWT method for different time shifts.

The stationary wavelet transform is similar to the DWT, except that the signal is
never subsampled; instead, the filters are upsampled at each level of decomposition.
Figure: A 3-level SWT filter bank

Each level's filters are upsampled versions of the previous level's.

The SWT is an inherently redundant scheme, as each set of
coefficients contains the same number of samples as the input; for a decomposition of N
levels, there is a redundancy of 2N.
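A sketch of the undecimated transform, assuming Haar filters and circular (periodic) filtering: the signal is never subsampled, and the filters are upsampled at each level, so every coefficient set keeps the full signal length.

```python
import numpy as np

G0 = np.array([1, 1]) / np.sqrt(2)    # Haar low-pass
H0 = np.array([1, -1]) / np.sqrt(2)   # Haar high-pass

def circ_filter(x, h):
    # Periodic (circular) filtering: the output has the same length
    # as the input -- nothing is ever subsampled.
    N = len(x)
    return np.array([sum(h[m] * x[(n - m) % N] for m in range(len(h)))
                     for n in range(N)])

def upsample_filter(h):
    # At each level the FILTERS are upsampled (zeros inserted between
    # taps) instead of decimating the signal.
    u = np.zeros(2 * len(h) - 1)
    u[::2] = h
    return u

def swt(x, levels):
    coeffs, g, h, a = [], G0, H0, np.asarray(x, float)
    for _ in range(levels):
        coeffs.append(circ_filter(a, h))   # detail set at this level
        a = circ_filter(a, g)              # approximation carried on
        g, h = upsample_filter(g), upsample_filter(h)
    coeffs.append(a)
    return coeffs

x = np.arange(8.0)
c = swt(x, 3)
print([len(v) for v in c])   # [8, 8, 8, 8]: every set spans the input
```

The output makes the redundancy visible: three detail sets plus one approximation, each as long as the input itself.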

One-Stage Filtering: Approximations and Details:

For many signals, the low-frequency content is the most important part. It is what
gives the signal its identity. The high-frequency content on the other hand imparts flavor or
nuance. Consider the human voice. If you remove the high-frequency components, the voice
sounds different but you can still tell whats being said. However, if you remove enough of the
low-frequency components, you hear gibberish. In wavelet analysis, we often speak of
approximations and details. The approximations are the high-scale, low-frequency components
of the signal. The details are the low-scale, high-frequency components.
The filtering process at its most basic level looks like this:

Figure16: Filtering Process

The original signal S passes through two complementary filters and emerges as two signals.

Unfortunately, if we actually perform this operation on a real digital signal, we
wind up with twice as much data as we started with. Suppose, for instance, that the original signal
S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a
total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the
1000 we had. There exists a more subtle way to perform the decomposition using wavelets. By
looking carefully at the computation, we may keep only one point out of two in each of the two
1000-length samples to get the complete information. This is the notion of downsampling. We
produce two sequences called cA and cD.
Figure 17: Sampling

The process on the right, which includes downsampling, produces DWT
coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete
wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise
added to it.

Here is our schematic diagram with real signals inserted into it:

Figure 18: Schematic Diagram


Multiple-Level Decomposition:

The decomposition process can be iterated, with successive approximations being
decomposed in turn, so that one signal is broken down into many lower-resolution components.
This is called the wavelet decomposition tree.

Figure 19: Wavelet Decomposition Tree

Looking at a signal's wavelet decomposition tree can yield valuable information.


Figure 20: Wavelet Decomposition Tree

Number of Levels:

Since the analysis process is iterative, in theory it can be continued indefinitely.
In reality, the decomposition can proceed only until the individual details consist of a single
sample or pixel. In practice, you'll select a suitable number of levels based on the nature of the
signal, or on a suitable criterion such as entropy.
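The iterated decomposition can be sketched as follows, assuming Haar filters: each level halves the number of approximation samples, which is why the signal length bounds the number of levels.

```python
import numpy as np

G0 = np.array([1, 1]) / np.sqrt(2)    # Haar low-pass
H0 = np.array([1, -1]) / np.sqrt(2)   # Haar high-pass

def wavedec(x, levels):
    """Iterate the one-level decomposition on successive
    approximations, halving the sample count at each level."""
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        d = np.convolve(a, H0)[1::2]   # detail kept at this level
        a = np.convolve(a, G0)[1::2]   # approximation decomposed again
        details.append(d)
    return a, details

x = np.random.default_rng(0).standard_normal(64)
a, details = wavedec(x, 3)                 # 64 -> 32 -> 16 -> 8 samples
print(len(a), [len(d) for d in details])   # 8 [32, 16, 8]
```

With 64 samples, at most log2(64) = 6 levels are possible before a single sample remains, matching the limit described above.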

Wavelet Reconstruction:

The discrete wavelet transform can be used to analyze, or decompose, signals and
images. This process is called decomposition or analysis. The other half of the story is how those
components can be assembled back into the original signal without loss of information. This
process is called reconstruction, or synthesis. The mathematical manipulation that effects
synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal using
Wavelet Toolbox software, we reconstruct it from the wavelet coefficients:

Where wavelet analysis involves filtering and downsampling, the wavelet reconstruction process
consists of upsampling and filtering. Upsampling is the process of lengthening a signal
component by inserting zeros between samples:
The toolbox includes commands, like idwt and waverec, that perform single-level or multilevel
reconstruction, respectively, on the components of one-dimensional signals. These commands
have their two-dimensional analogs, idwt2 and waverec2.

Reconstruction Filters:

The filtering part of the reconstruction process also bears some discussion, because it is
the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The
downsampling of the signal components performed during the decomposition phase introduces a
distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and
reconstruction phases that are closely related (but not identical), we can cancel out the effects
of aliasing. The low- and high-pass decomposition filters (L and H), together with their
associated reconstruction filters (L' and H'), form a system of what is called quadrature mirror
filters:
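The alias cancellation described above can be checked numerically. The sketch below assumes the orthogonal db2 (Daubechies) filters and periodic extension; the high-pass filter and the reconstruction pair are derived from the low-pass filter by the quadrature mirror relation.

```python
import numpy as np

# db2 decomposition low-pass filter; the high-pass companion follows
# the quadrature mirror relation h[m] = (-1)^m * g[len(g)-1-m].
s3 = np.sqrt(3)
g = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
h = np.array([(-1) ** m * g[len(g) - 1 - m] for m in range(len(g))])

def analyze(x, f):
    # Inner products against even shifts of the filter (circular),
    # i.e., filtering followed by decimation by 2.
    N = len(x)
    return np.array([sum(f[m] * x[(2 * k + m) % N] for m in range(len(f)))
                     for k in range(N // 2)])

def synthesize(a, d, g, h, N):
    # Place each coefficient back on its even shift and accumulate.
    xh = np.zeros(N)
    for k in range(N // 2):
        for m in range(len(g)):
            xh[(2 * k + m) % N] += a[k] * g[m] + d[k] * h[m]
    return xh

x = np.random.default_rng(1).standard_normal(16)
a, d = analyze(x, g), analyze(x, h)
xh = synthesize(a, d, g, h, len(x))
print(np.allclose(xh, x))   # True: the aliasing from decimation cancels
```

Each branch alone is aliased by the decimation; only the carefully matched pair of branches cancels the distortion, which is the point of the quadrature mirror construction.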
Reconstructing Approximations and Details

We have seen that it is possible to reconstruct our original signal from the coefficients of the
approximations and details.

It is also possible to reconstruct the approximations and details themselves from their
coefficient vectors. As an example, let's consider how we would reconstruct the first-level
approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through
the same process we used to reconstruct the original signal. However, instead of combining it
with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients
vector:

The process yields a reconstructed approximation A1, which has the same length as the original
signal S and which is a real approximation of it. Similarly, we can reconstruct the first-level
detail D1, using the analogous process:
The reconstructed details and approximations are true constituents of the original signal. In fact,
we find when we combine them that

Note that the coefficient vectors cA1 and cD1, because they were produced by downsampling
and are only half the length of the original signal, cannot directly be combined to reproduce
the signal. It is necessary to reconstruct the approximations and details before combining them.
Extending this technique to the components of a multilevel analysis, we find that similar
relationships hold for all the reconstructed signal constituents. That is, there are several ways to
reassemble the original signal:
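The identity A1 + D1 = S can be verified with a short sketch (assuming Haar filters): each coefficient vector is fed through the synthesis stage with zeros in place of the other branch.

```python
import numpy as np

G0, H0 = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
G1, H1 = G0[::-1], H0[::-1]        # synthesis filters (time-reversed)

def upsample(c):
    u = np.zeros(2 * len(c))
    u[::2] = c
    return u

def idwt(a, d):
    # Upsample, filter with the synthesis pair, add.
    n = 2 * len(a)
    return (np.convolve(upsample(a), G1) + np.convolve(upsample(d), H1))[:n]

S = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
cA1 = np.convolve(S, G0)[1::2]       # half-length coefficient vectors
cD1 = np.convolve(S, H0)[1::2]

A1 = idwt(cA1, np.zeros_like(cD1))   # zeros fed in for the details
D1 = idwt(np.zeros_like(cA1), cD1)   # zeros fed in for the approximations
print(np.allclose(A1 + D1, S))       # True: A1 and D1 are full length
```

Note that cA1 and cD1 themselves are half length and cannot be added directly; only after reconstruction do A1 and D1 have the length of S and sum to it.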
Wavelet families:
Several families of wavelets have proven to be especially useful. Some wavelet
families are:
Haar
Daubechies
Biorthogonal
Coiflets
Symlets
Morlet
Mexican hat
Meyer
Other real wavelets
Complex wavelets
Haar

Any discussion of wavelets begins with the Haar wavelet, the first and simplest. The Haar
wavelet is discontinuous and resembles a step function. It represents the same wavelet as
Daubechies db1.

Daubechies
Ingrid Daubechies, one of the brightest stars in the world of wavelet research, invented
what are called compactly supported orthonormal wavelets thus making discrete wavelet
analysis practicable. The names of the Daubechies family wavelets are written dbN, where N is
the order, and db the surname of the wavelet. The db1 wavelet, as mentioned above, is the
same as Haar wavelet. Here are the wavelet functions psi of the next nine members of the family:

Biorthogonal

This family of wavelets exhibits the property of linear phase, which is needed for signal
and image reconstruction. By using two wavelets, one for decomposition (on the left side) and
the other for reconstruction (on the right side) instead of the same single one, interesting
properties are derived.
Coiflets

Built by I. Daubechies at the request of R. Coifman. The wavelet function has 2N
moments equal to 0, and the scaling function has 2N-1 moments equal to 0. The two functions
have a support of length 6N-1. You can obtain a survey of the main properties of this family by
typing waveinfo('coif') at the MATLAB command line.

Symlets

The symlets are nearly symmetrical wavelets proposed by Daubechies as modifications to the db
family. The properties of the two wavelet families are similar. Here are the wavelet functions psi.
Morlet

This wavelet has no scaling function, but is explicit.

Mexican Hat

This wavelet has no scaling function and is derived from a function that is proportional to
the second derivative function of the Gaussian probability density function.
Meyer

The Meyer wavelet and scaling function are defined in the frequency domain.

Other Real Wavelets

Some other real wavelets are available in the toolbox:


Reverse Biorthogonal
Gaussian derivatives family
FIR based approximation of the Meyer wavelet
Complex Wavelets

Some complex wavelet families are available in the toolbox:


Gaussian derivatives
Morlet
Frequency B-Spline
Shannon
CHAPTER-2
LITERATURE SURVEY
MULTIPLE ACCESS DEFINITION

Satellites are always built with the intention that many users will share the bandwidth.
The ability of the satellite to carry many signals at the same time is known as multiple access. It
allows the communication capacity of the satellite to be shared among a large number of earth
stations. The signals that earth stations transmit to a satellite may differ widely in their character,
but they can be sent through the same satellite using multiple access and multiplexing
techniques.

Multiplexing is the process of combining multiple signals into a single signal so that it
can be processed by a single amplifier or transmitted over a single radio channel. The
corresponding technique that recovers the individual signals is called demultiplexing.

The distinction between multiplexing and multiple access is that multiplexing is done at
one location whereas multiple access refers to the signals from a number of different geographic
locations.

MULTIPLE ACCESS TECHNIQUES

Multiplexing is done at the earth stations; after modulation, the signals are transmitted
to the satellite, where they share the satellite transponder through different multiple access
techniques. There are basically three multiple access techniques. They are:

1. Frequency division multiple access (FDMA)

2. Time division multiple access (TDMA)

3. Code division multiple access (CDMA)

The reason for using such techniques is to allow all users of a cellular system to
share the available bandwidth simultaneously.

FREQUENCY DIVISION MULTIPLE ACCESS (FDMA)

Frequency division multiple access is a technique in which all the earth stations share the
satellite transponder bandwidth at the same time but each earth station is allocated a unique
frequency slot. Each station transmits its signals within that piece of frequency spectrum.

FDMA was the first multiple-access technique deployed for cellular systems, such as the AMPS
cellular system. In the figure below, it can be seen that each user is assigned a unique channel
(frequency band). In other words, no other user can share the same frequency channel during the
period of the call using FDD (Frequency Division Duplexing).

FDMA is an analog FM multiple-access technique in which transmission for any user is
continuous. FDD is a frequency-domain duplexing technique; that is, FDD provides two distinct
frequency bands, forward (base station to mobile) and reverse (mobile to base station), for
every user.

Figure: FDMA channels

TIME DIVISION MULTIPLE ACCESS (TDMA)


Time division multiple access is a technique in which each earth station is allocated a
unique time slot at the satellite so that signals pass through the transponder sequentially. TDMA
causes delay in the transmission.

TDMA is a digital multiple-access technique which divides the radio spectrum into time
slots (channels), and only one user is allowed to either transmit or receive in each slot. In the figure
below, it can be seen that each user occupies a particular time slot within every frame, where a
frame comprises N time slots.

In TDMA, time domain duplexing (TDD) and FDD are the two possible duplexing
techniques that can be used. In TDMA/TDD systems, multiple users share the same frequency
channel by taking turns in the time domain. TDD is a duplexing technique which partitions time
instead of frequency to provide both the forward and reverse link channels. Each user is
assigned a forward and a reverse time slot in each frame and is only allowed to access the radio
channel in these assigned slots. Furthermore, time slots in a frame are divided equally between
the forward and reverse link channels.

On the other hand, in TDMA/FDD systems, an identical or similar frame structure is
used entirely for either forward or reverse transmission, but in this case the carrier frequencies
are different for the forward and reverse links. In TDMA systems, data is transmitted in a
buffer-and-burst method, which means transmission for any user is non-continuous. Digital data
and digital modulation are used with TDMA, leading to data being transmitted in discrete packets.
Figure: TDMA channels

CODE DIVISION MULTIPLE ACCESS (CDMA)

Code division multiple access is a technique in which all the earth stations transmit
signals to the satellite on the same frequency and at the same time. The earth station transmits the
coded spectrum which is then separated or decoded at the receiving earth station.

Due to the growing demand for higher user capacity, FDMA and TDMA systems were unable
to withstand high system overload and system problems. In particular, in FDMA systems, non-
linear effects were observed when the power amplifiers or the power combiners operate at or
near saturation for maximum power efficiency, and adjacent-channel interference occurs.

Developed by Qualcomm Inc. in 1995, CDMA is a more recently developed digital multiple
access technique. CDMA, or Code Division Multiple Access, was standardized by the
Telecommunications Industry Association (TIA) as an Interim Standard (IS-95). Compared to
TDMA and FDMA, CDMA is superior in terms of user capacity, signal quality, security, power
consumption and reliability. It enables allocation of data in increments of 8 kilobits per second
within the 1.25 MHz CDMA channel bandwidth.

As a benchmark, CDMA is able to offer up to 6 times the capacity of TDMA, and
about 7-10 times the capacity of analog technologies such as AMPS and FDMA, and now holds
over 600 million subscribers worldwide.

In CDMA systems, all users of the system are allowed to use the same carrier
frequency band and may transmit simultaneously, as depicted in Figure 2.4, through the use of
Direct-Sequence Spread Spectrum. Therefore, CDMA is also known as DSMA (Direct Spread
Multiple Access).
Figure : CDMA channels

A narrowband message is multiplied with a much larger bandwidth signal, called the
spreading signal, which is uncorrelated with the message signal. The transmitted signal will then
have a bandwidth essentially equal to the bandwidth of the spreading signal. The spreading signal
is composed of symbols defined by a pseudorandom sequence, which is known to both the
transmitter and the receiver. These symbols are called chips.

Typically the chip rate is much greater than the symbol rate of the original data sequence.
The pseudorandom chip sequence is also known as the PN (Pseudo Noise) sequence as the
power spectral density of the pseudorandom chip sequence looks approximately like white noise.

Each user in a CDMA system is given its own PN sequence for data transmission, which is
approximately orthogonal to the PN sequences of other users in the system. Due to the orthogonality
between PN sequences, the problem of multi-user interference in the system is eliminated.
FREQUENCY HOPPED MULTIPLE ACCESS (FHMA)

Like CDMA, Frequency Hopped Multiple Access (FHMA) is another spread spectrum
technique, which uses long PN codes for signal spreading and despreading. FHMA is a digital
multiple access technique in which the carrier frequency of each transmitting user varies in a
pseudorandom fashion within the system's available bandwidth. As a spread spectrum
technique, FHMA allows users of the system to transmit simultaneously.

FHMA allows multiple users to access the system spectrum simultaneously, as each user
occupies a specific non-overlapping portion of the spectrum determined by their unique long PN
code at a particular instance of time. In contrast, each CDMA user is allocated the same portion
of the spectrum all the time.

The major advantage of using FHMA over CDMA is the level of security which it
provides, especially when a large number of channels are used.

CDMA

A CDMA system is a multi-user spread spectrum system that eliminates the frequency reuse
problem in cellular systems. Unlike TDMA and FDMA systems, where user signals never
overlap in either the time or the frequency domain, respectively, a CDMA system allows
transmissions at the same time while using the same frequency. The mechanism separating the
users in a CDMA system consists of assigning a unique code that modulates the signal from each
user; the number of unique codes in a CDMA link is equal to the number of active users. The
code modulating the user's signal is also called a spreading code, spreading sequence, or chip
sequence.

CDMA Generation

Transmitter

The transmitter operations comprise convolutional encoding and repetition, block
interleaving, long PN sequence generation, data scrambling, Walsh coding and quadrature
modulation.

The convolutional encoder performs error-control coding by encoding the incoming bit
stream. This allows error correction at the receiver, and hence improves communication
reliability.

In the CDMA transmitter, an R = 1/2, K = 9 convolutional encoder is deployed. R = 1/2 means the
encoder produces 2 code bits for every incoming data bit, and K = 9 means the
encoder has a constraint length of 9; it is a 9-stage shift register. A diagram of the convolutional
encoder is shown in the figure below.

Figure: Rate 1/2 Convolutional Encoder for Rate Set 2

Each code bit, C0 and C1 as shown in Figure 3.2, is produced according to the two
specified generator polynomials g0 and g1, which perform modulo-2 addition on the values
of specific stages of the shift register:

G0(x) = x^9 + x^8 + x^6 + x^4 + x^3 + x^2 + x

G1(x) = x^9 + x^5 + x^4 + x^3 + x

As an illustration, C0 is calculated by performing modulo-2 addition on the values in
stages 1, 3, 4, 5, 6, 8 and 9 of the shift register, as specified by the generator polynomial.

Then, before the encoded data can be fed into the block interleaver, whose input rate is fixed at
19.2 kbps, the encoded data may have to be repeated, depending on the original data input rate
into the convolutional encoder. For example, if the data input rate to the convolutional
encoder was 4800 bps, the encoded data would have to be repeated 2 times to achieve 19.2 kbps
before inputting to the block interleaver.
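A sketch of the encoder, with the tap positions taken from the generator polynomials as printed above (they should be checked against the IS-95 standard text, since constraint length K = 9 normally implies tap degrees 0 through 8, and the polynomials here may have been garbled in extraction):

```python
# Tap positions follow the generator polynomials printed above;
# treat them as illustrative, not as the exact IS-95 taps.
G0_TAPS = [9, 8, 6, 4, 3, 2, 1]
G1_TAPS = [9, 5, 4, 3, 1]
K = 9

def conv_encode(bits, taps_list=(G0_TAPS, G1_TAPS)):
    """Rate-1/2 convolutional encoder: a K-stage shift register;
    each output code bit is the modulo-2 sum of the tapped stages."""
    sr = [0] * K                      # shift register, index 0 = stage 1
    out = []
    for b in bits:
        sr = [b] + sr[:-1]            # shift the new data bit in
        for taps in taps_list:
            out.append(sum(sr[t - 1] for t in taps) % 2)
    return out

data = [1, 0, 1, 1, 0]
code = conv_encode(data)
print(len(code))   # 10: two code bits per data bit (R = 1/2)
```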
Block interleaver:

After convolutional encoding and repetition, symbols are sent to a 20 ms block interleaver,
which is a 24-by-16 array. The block interleaver rearranges the order of the bits in the
transmitted data stream. The 384 data bits contained in each 20 ms data frame are read
consecutively into the block interleaver row by row. Then, once the interleaver matrix is
completely filled, the block interleaver outputs the data bits column by column. An
illustration of the operation of the block interleaver is shown in the figure below.

Figure: Transmitter interleaver operation

As a result, block interleaving greatly decreases the susceptibility of the transmitted user
information to error bursts, and hence increases the probability of recovering the original user
message at the receiver.
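The row-in, column-out operation can be sketched directly; the 24-by-16 geometry is as stated above, and the receiver-side inverse is included for illustration.

```python
import numpy as np

ROWS, COLS = 24, 16          # 24-by-16 array: 384 bits per 20 ms frame

def interleave(bits):
    """Write the 384-bit frame into the matrix row by row,
    then read it out column by column."""
    m = np.reshape(bits, (ROWS, COLS))   # row-by-row fill
    return m.T.reshape(-1)               # column-by-column read

def deinterleave(bits):
    # The receiver inverts the operation: fill column-wise, read row-wise.
    m = np.reshape(bits, (COLS, ROWS))
    return m.T.reshape(-1)

frame = np.arange(384)
out = interleave(frame)
print(out[:4])    # first outputs come from column 0: positions 0, 16, 32, ...
print(np.array_equal(deinterleave(out), frame))   # True
```

Adjacent transmitted bits are 16 positions apart in the original frame, so a burst of channel errors is scattered across the frame after deinterleaving, which is exactly the burst-protection effect described above.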

Long PN sequence:

A long PN sequence is uniquely assigned to each user; it is a periodic long code
with period 2^42 - 1. There are two reasons for using the long PN sequence:

1. Channelization: the base station separates forward channel traffic by applying
different sequences to different subscribers.

2. Privacy: each user uses a different long code, and due to the pseudorandom nature of
the codes they are difficult to decode, as different sequences are orthogonal to each other.

There are two ways of generating the long PN sequence. One technique uses the
Electronic Serial Number (ESN) of the subscriber to generate the long PN sequence, which is
therefore publicly known if the ESN is known. Another technique generates the long PN
sequence using keys that are known only to the base station and subscriber unit, providing a
level of privacy and preventing simple de-spreading. In IS-95, the long PN sequence is derived
from an m-sequence with m = 42 (shift register length). The generator polynomial is

G(x) = x^42 + x^35 + x^33 + x^31 + x^27 + x^26 + x^25 + x^22 + x^21 + x^19 + x^18
+ x^17 + x^16 + x^10 + x^7 + x^6 + x^5 + x^3 + x

A Long Code Mask for generating long PN sequences is illustrated in Figure

Figure: The Long Code Mask for generating PN sequences
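A sketch of the long-code generator as a Fibonacci LFSR, with feedback taps read from the polynomial printed above (treat the taps as illustrative; the full period 2^42 - 1 is far too long to enumerate, so only a few chips are produced):

```python
# Feedback taps taken from the generator polynomial printed above.
TAPS = [42, 35, 33, 31, 27, 26, 25, 22, 21, 19,
        18, 17, 16, 10, 7, 6, 5, 3, 1]

def long_pn(seed, n_chips):
    """Generate n_chips from a 42-stage Fibonacci LFSR.
    state[0] is stage 1 (newest); state[41] is stage 42 (output)."""
    state = [(seed >> i) & 1 for i in range(42)]
    chips = []
    for _ in range(n_chips):
        chips.append(state[-1])            # output of stage 42
        fb = 0
        for t in TAPS:                     # modulo-2 sum of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]          # shift the feedback bit in
    return chips

chips = long_pn(seed=1, n_chips=16)
print(len(chips))   # 16 binary chips
```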


Data scrambler:

Data scrambling is performed after the block interleaver. The 1.2288 Mcps long PN
sequence is applied to a decimator, which keeps only the first chip out of every sixty-four
consecutive PN chips. The symbol rate from the decimator is 19.2 ksps. Data scrambling is
performed by modulo-2 addition of the interleaver output with the decimator output symbols.

Walsh coding:

Walsh coding is performed after data scrambling in the transmitter. Each data symbol
coming out of the scrambler is replaced by a 64-chip Walsh sequence. Each 64-chip Walsh
sequence corresponds to a row of the 64-by-64 Walsh matrix (also called a Hadamard
matrix). The Walsh matrix contains one row of all zeros, and the remaining rows each have an
equal number of ones and zeros. The figure below shows how a Walsh matrix is generated.

Figure: Hadamard matrix formation

Each subscriber is assigned a different row of 64 Walsh chips, depending on the channel
number which it is using, and each subscriber occupies a different channel. For example, if a
subscriber uses channel 23, each symbol of the subscriber's scrambled data symbol stream is
processed using the 64 Walsh chips of the 23rd row of the Walsh matrix. If the scrambled data
symbol is a 0, the symbol is simply replaced by the 23rd-row Walsh chips. On the other hand, if
the data symbol is a 1, the symbol is replaced by the bit-inverted version of the 23rd-row Walsh
chips. As a result, Walsh coding increases the data rate from 19.2 ksps to 1.2288 Msps.
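The doubling construction of the Walsh (Hadamard) matrix can be sketched as follows; the 0/1 convention matches the description above, with one all-zero row and every other row balanced.

```python
import numpy as np

def hadamard(n):
    """Build the 2^n-by-2^n Walsh (Hadamard) matrix over {0, 1} by
    the doubling construction: each step tiles the previous matrix
    as [[H, H], [H, inverted H]]."""
    H = np.array([[0]])
    for _ in range(n):
        H = np.block([[H, H], [H, 1 - H]])
    return H

W = hadamard(6)          # the 64-by-64 matrix used for Walsh coding
print(W.shape)           # (64, 64)
print(W[0].sum())        # 0: the first row is all zeros
# Every remaining row has an equal number of ones and zeros:
print(all(W[i].sum() == 32 for i in range(1, 64)))   # True
```

A data symbol of 0 on channel k would be sent as row W[k], and a 1 as its bit-inverted version 1 - W[k], matching the channel-23 example above.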
Quadrature modulation:

The final processing before transmitting the user's information involves quadrature
modulation of the Walsh-coded data chip stream. Quadrature modulation allows easy acquisition
and synchronization at the mobile receiver. It involves separating the incoming data chip stream
into an I data chip stream and a Q data chip stream and mixing each with its corresponding
short PN sequence.

Figure: The Quadrature spreading stage for the channel.

The short PN sequences used in this operation are generated from their own generator
polynomials.

The IS-95 CDMA forward link communication channel is modeled as a multipath fading
channel with Additive White Gaussian Noise (AWGN).

Multipath fading channel:

The communication channel is the medium through which the transmitted radio signal
travels in order to reach the receiver. The channel can be modeled as a linear filter with a
time-varying channel impulse response. A channel impulse response describes the amplitude and
phase effects that the channel imposes on the transmitted radio signal as it passes through
the medium. IS-95 CDMA communication channels are often modeled as multipath fading
channels, as this is the best model for a mobile communication channel.

The term "fading" describes the small-scale variation of a mobile radio signal. As each
transmitted signal is represented by a number of multipath components, each having a different
propagation delay, the channel impulse response is different for each multipath component.
Therefore, not only is the channel response time-varying, it is also a function of the
propagation delay. Hence, the channel impulse response should be written as h(t, tau), where t is
the specific time instance and tau is the multipath delay at time t. As a result, the received signal
in a multipath channel consists of a number of attenuated, time-delayed, and phase-shifted
versions of the original signal, and the baseband impulse response of a multipath channel can be
written as

h_b(t, tau) = SUM_i a_i(t, tau) exp[ j(2*pi*f_c*tau_i(t) + phi_i(t, tau)) ] delta(tau - tau_i(t))

where a i(t,t ) and t i(t) are the amplitude and delay, respectively, of the i th multi path
component at time t. The phase term 2pf t (t) +f I (t, t) represents the phase shift due to free space
propagation of the multi path component, plus any additional phase shift which it encountered in
the channel. And d(t -t i (t)) is the unit impulse function for the ith multi path component with
delay t and at time instance t. Figure 3.1 .2 illustrates an example of the channel response of a
time varying discrete-time

Multi path fading channel:

A discrete-time impulse response model for a multi path radio channel

The output signal of each multipath component can always be calculated simply by convolving
the original signal with the channel impulse response h(t, τ) of the multipath channel for that
component.
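This convolution view can be sketched as follows; the tap delays and gains below are illustrative values only, not measured channel parameters:

```python
# Sketch of a discrete-time multipath channel: the output is the sum of
# attenuated, delayed copies of the input, i.e. a convolution with the
# channel impulse response. Tap delays/gains are illustrative only.
def multipath_channel(signal, taps):
    """taps: list of (delay_in_samples, complex_gain) pairs."""
    out = [0j] * (len(signal) + max(d for d, _ in taps))
    for delay, gain in taps:
        for n, x in enumerate(signal):
            out[n + delay] += gain * x  # add in the delayed, scaled copy
    return out

tx = [1, -1, 1, 1]
rx = multipath_channel(tx, [(0, 1.0), (2, 0.5)])  # direct path + weaker echo
print(rx)
```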

Noise in Communication Channel:


Within any communication channel, there is always noise present due to other surrounding radio
signals. This noise can be white (colorless) or colored, and it can interact with the transmitted
user signal in different ways; for instance, the interaction can be additive, multiplicative or
complex. In communication systems, a transmitted signal is very vulnerable to noise, especially
within the communication channel.

Noise is often classified as an unwanted signal, or interference, which is present along with an
information signal in a communication channel. Often, the level of noise present is
uncontrollable, as there are many potential sources of noise in the channel. However, by
determining the approximate power level of noise in a communication channel, the Bit-Error-Rate
(BER) of a communication system can be greatly reduced by adjusting the power level of the
transmitted information signal. In the IS-95 CDMA cellular system, the channel noise analysis is
often based on white noise.

White Noise:

White noise is a type of noise that often exists in communication channels. It is remarkably
different from other types of noise, due to the fact that its Power Spectral Density (PSD) is
independent of the operating frequency. The word "white" is used in the sense that white light
contains all other visible light frequencies in the band of electromagnetic radiation.

The equivalent noise temperature of a system is defined as the temperature at which a noisy
resistor has to be maintained such that, when it is connected to a noiseless version of the
system, it will produce the same amount of noise power at the output of the system as the total
noise power produced by the noise sources in the actual system.

As the autocorrelation function of a signal is mathematically defined as the inverse Fourier
Transform of the PSD, the autocorrelation function R_w(τ) of white noise follows directly from
its flat PSD.

Figure: (a) PSD and (b) autocorrelation function of white noise

It can be seen that the autocorrelation function R_w(τ) of white noise consists of a delta
function multiplied by a factor of N0/2, occurring only at τ = 0, with R_w(τ) equal to 0 for τ
elsewhere. Hence, any two different time samples of white noise will be uncorrelated, no matter
how close they are to each other in time. In addition, if the white noise is also Gaussian, then
the samples are statistically independent and exhibit total randomness. In common IS-95 CDMA
forward link cellular systems, communication channels are often modeled with Additive White
Gaussian Noise (AWGN). The adjective "additive" describes the interaction that happens when the
noise meets another signal: when AWGN comes in contact with a user signal, the real and
imaginary amplitude components of the two signals add up and form a new signal.
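A minimal sketch of this additive interaction, with an arbitrary seed and noise level chosen purely for illustration:

```python
import random

# Minimal AWGN sketch: complex Gaussian noise is simply added sample by
# sample, matching the "additive" interaction described above. The seed
# and noise standard deviation are arbitrary illustration values.
def add_awgn(signal, noise_std, seed=0):
    rng = random.Random(seed)  # fixed seed -> reproducible noise
    return [s + complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
            for s in signal]

clean = [1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j]   # e.g. BPSK samples
noisy = add_awgn(clean, noise_std=0.1)       # each sample gains a small perturbation
```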

Receiver:

The CDMA standard describes the processing performed in the terminal receiver as being
"complementary to those of the base station modulation processes on the Forward CDMA Channel".

The demodulation processing that the terminal receiver architecture must perform includes Rake
receiver combining (IQ demodulation and maximal ratio combining), Walsh decoding, long PN
sequence removal, data descrambling, block de-interleaving, and Viterbi decoding. These
operations all act to reverse the operation of a corresponding component in the transmitter.

A diagrammatic representation of the Rake receiver is depicted in the figure below.

Figure: CDMA receiver demodulation process.

RAKE Receiver Combining:

The RAKE receiver essentially acts to reverse the multipath distortion effects of the channel.
In practice, a 3-finger Rake receiver is used, as it is economical and provides acceptable
signal reception quality in a multipath environment. The structure of the Rake receiver is
illustrated in the figure below.

Figure: A diagrammatic representation of an M-arms rake receiver


In designing a RAKE receiver, it is ideal to have as many fingers as possible, each finger
picking up a different delayed signal. However, the more fingers a RAKE receiver contains, the
higher the costs associated with its manufacture and operation. The Rake receiver was originally
developed in the 1950s specifically to equalize multipath effects in multi-user radio
communication environments. The name "Rake" is derived from the fact that the bank of parallel
correlators looks similar to the fingers of a rake. The Rake receiver is used to combat the
multipath effect by detecting the multipath copies of a user's information signal in the
communication channel and adding them algebraically.

There are three major operations carried out in the RAKE receiver:

(1) capturing the delayed versions of the received signal;

(2) IQ demodulation using the reference PN sequences, which are the ones used in IQ modulation
in the transmitter; and

(3) assigning weights to the correlator outputs and performing maximal ratio combining to
retrieve a final signal.

Each correlator receives a different delayed version of the received signal. In Figure 3.10,
each correlator picks up a different delayed version of the user information signal. In order to
pass the received signal from the input to the output of a correlator, the correlator must apply
the reference pilot PN sequences used in the IQ modulation stage in the transmitter. In fact,
the operation of IQ demodulation is the same as that performed in IQ modulation, simply applying
the same I and Q pilot PN sequences to the received data.

The correlator outputs are weighted according to their signal strengths. The strong paths are
accentuated, while the paths with no substantial contribution are suppressed. Then, all
correlator outputs are combined using the maximal ratio combining principle: the signal-to-noise
ratio of a weighted sum is maximized when the amplitude weighting is assigned in proportion to
the pertinent signal strength, where each element of the sum consists of a signal plus additive
noise of fixed power. The linear combiner output is:

y(t) = Σ_{k=1}^{M} a_k z_k(t)

where a_k is the weighting coefficient, z_k(t) is the phase-compensated output of the k-th
correlator, and M is the number of correlators in the Rake receiver.
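The combiner equation can be sketched directly; the finger outputs and path amplitudes below are arbitrary example values, not measurements:

```python
# Sketch of maximal-ratio combining: each phase-compensated correlator
# output z_k is weighted in proportion to its estimated path amplitude
# a_k, then all products are summed. Values below are arbitrary examples.
def mrc_combine(finger_outputs, path_amplitudes):
    return sum(a * z for a, z in zip(path_amplitudes, finger_outputs))

fingers = [0.9, 1.1, 0.4]   # correlator outputs z_k (3-finger Rake)
amps = [1.0, 0.8, 0.2]      # estimated path strengths a_k
y = mrc_combine(fingers, amps)
print(round(y, 2))  # 1.86
```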

Walsh decoder:

Walsh decoding is performed after Rake receiver combining. Each 64-chip Walsh coded sequence
coming out of the Rake receiver is replaced by 1 data symbol. During signal reception, the
receiver must have prior knowledge of the channel number which the subscriber is using.
Therefore, during Walsh decoding, the Walsh decoder keeps two buffers storing the normal and
bit-inverted versions of the row of the Hadamard matrix specified by the channel number.

Hence, for every 64-chip Walsh coded sequence entering the Walsh decoder, the decoder checks the
64-chip sequence against the two buffers and decides which data symbol to replace it with. If
the incoming 64-chip sequence matches the normal version, it is replaced by a data symbol '0',
and if it matches the bit-inverted version, it is instead replaced by a data symbol '1'. It is
reasonable to expect that bit errors may occur in each sequence of 64 chips due to noise and
interference before entering the Walsh decoder, so the chips may not exactly match the bit
values of either buffer. This is handled by the Walsh decoder, which selects the buffer showing
the least number of bit differences against the incoming data before replacing.
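This minimum-distance decision can be sketched as follows, shortened to 8 chips instead of 64 for brevity (the Walsh row used is an arbitrary example, not a specific Hadamard row from the standard):

```python
# Sketch of Walsh decoding by minimum Hamming distance: the received chip
# block is compared against the normal and bit-inverted Walsh rows, and
# the closer one decides the output bit. 8 chips here stand in for 64.
def walsh_decode(rx_chips, walsh_row):
    inverted = [1 - c for c in walsh_row]
    d_norm = sum(a != b for a, b in zip(rx_chips, walsh_row))
    d_inv = sum(a != b for a, b in zip(rx_chips, inverted))
    return 0 if d_norm <= d_inv else 1

row = [0, 1, 0, 1, 0, 1, 0, 1]
print(walsh_decode([0, 1, 0, 1, 1, 1, 0, 1], row))  # 0 (despite one chip error)
print(walsh_decode([1, 0, 1, 0, 1, 0, 1, 0], row))  # 1 (matches inverted row)
```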

Long PN sequence:

The long PN sequence is applied to the received user signal to remove the effect of the long PN
sequence applied to the user data bit stream at the transmitter. The receiver has prior
knowledge of the starting long PN sequence state used by the transmitter, and hence assigns the
shift register stage values accordingly and generates all the same subsequent PN sequences used
by the transmitter. All long PN sequences generated at the receiver also have period 2^42 - 1.
The PN sequences are generated at the receiver using the same generating techniques and
generating polynomials as in the transmitter.
Data de scrambler:

Data descrambling is performed after Walsh decoding. The 1.2288 Mcps long PN sequence is applied
to a decimator, which keeps only the first chip out of every sixty-four consecutive PN chips.
The symbol rate from the decimator is therefore 19.2 ksps. Data descrambling is performed by
modulo-2 addition of the Walsh decoded data symbol with the decimator output symbol.
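The decimation and the modulo-2 descrambling step can be sketched as below; the symbol and PN values are arbitrary placeholders:

```python
# Decimation sketch: keep only the first chip of each group of 64,
# reducing the 1.2288 Mcps long PN rate to 19.2 ksps; descrambling is
# then a modulo-2 addition with the decimated PN symbols. Values are
# arbitrary placeholders.
def decimate(pn_chips, factor=64):
    return pn_chips[::factor]  # first chip of every `factor` chips

def descramble(symbols, decimated_pn):
    return [s ^ p for s, p in zip(symbols, decimated_pn)]

print(decimate(list(range(256))))              # [0, 64, 128, 192]
print(descramble([1, 0, 1, 1], [1, 1, 0, 0]))  # [0, 1, 1, 1]
```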

Every 20 ms, 384 data symbols coming out of the data descrambler are consecutively entered into
the block de-interleaver column by column. Then, once the block de-interleaver matrix is totally
filled up, the block de-interleaver outputs the data bits column by column, as illustrated in
Figure 3.11. It is clear that the matrix dimension and operation of the block de-interleaver are
contrary to those of the block interleaver.

Figure: Forward link de interleaver operation.

Viterbi decoder:

Viterbi decoding is the final processing performed at the receiver, and in the entire
communication system. The purpose of Viterbi decoding is to decode the convolutionally encoded
user data symbols. The Viterbi algorithm relies on the equivalence between maximum likelihood
decoding and minimum distance decoding for a binary symmetric channel.

In brief, maximum likelihood decoding means the decoder must choose the estimate for which the
log-likelihood function log p(r|c) is maximum, where r is the received code vector and c is the
transmitted vector.

On the other hand, minimum distance decoding means the decoder must choose the estimate that
minimizes the Hamming distance between the received vector r and the transmitted vector c.

Hence, the Viterbi algorithm implies that a convolutional code can be decoded by selecting the
path in the code tree whose coded sequence differs in the least number of places from the
received sequence, that is, the path minimizing the Hamming distance between the two codes. The
Viterbi decoder used in a common IS-95 CDMA forward link receiver has a window length of 18.
Therefore, every 18 bits of the convolutionally encoded user data bit stream enter the Viterbi
decoder and are decoded. As a result, 9 bits are produced at the decoder output after each
decoding.

The algorithm works by predicting all the outputs of the convolutional encoder based on the
starting shift register states of the encoder, and finding the output that has the least
difference when compared to the incoming 18-bit sequence entering the decoder. For every 20 ms
user data frame, the Viterbi decoder initially computes a code tree, with the knowledge that the
shift register of the convolutional encoder starts with all 0's, then computes all 512 possible
18-bit outputs of the encoder after 9 iterations. Then, the decoder compares the incoming 18-bit
sequence with all 512 possible outputs and selects the one with the least Hamming distance.

Then, the Viterbi decoder regenerates its code tree by assuming the starting state of the
convolutional encoder to be the state corresponding to the 18-bit output previously selected.
This process is repeated until a total of 21 decoding operations have executed, which outputs a
total of 378 bits. Finally, an extra 6 zero bits are output from the decoder to give a total of
384 bits.
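The minimum distance principle can be illustrated with a toy rate-1/2 convolutional code (constraint length 3, generators 7 and 5 octal, far shorter than the constraint length 9 code used in IS-95) and a brute-force search over all candidate messages; a real Viterbi decoder reaches the same decision while pruning this search with dynamic programming:

```python
from itertools import product

# Toy minimum-distance decoding sketch. conv_encode is a rate-1/2,
# constraint-length-3 encoder (generators 111 and 101 binary), NOT the
# IS-95 K=9 code; the brute-force search below selects the same estimate
# a Viterbi trellis search would.
def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]  # generator taps 111 and 101
        s1, s2 = b, s1
    return out

def min_distance_decode(rx, n_bits):
    best = min(product([0, 1], repeat=n_bits),
               key=lambda cand: sum(a != b for a, b in
                                    zip(conv_encode(list(cand)), rx)))
    return list(best)

msg = [1, 0, 1, 1]
rx = conv_encode(msg)
rx[3] ^= 1  # inject a single channel bit error
print(min_distance_decode(rx, 4))  # [1, 0, 1, 1] -- the error is corrected
```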

While the current IS-95 CDMA system serves some 600 to 800 million users worldwide today, the
demand for new services is continuously increasing. Services such as real-time voice with video
conferencing and high speed internet access on mobiles have always been the dream of IS-95 CDMA
system users.

However, these services are not possible on an IS-95 CDMA system, due to its data transmission
rate, which can only operate up to 9600 bps, while real-time voice with video conferencing and
seamless internet access requires a minimum bandwidth of 0.3 Mbps, at least 30 times the maximum
data transmission rate capability of the IS-95 CDMA system. Therefore, these unachievable
services have always been classified as features which the next generation of CDMA systems
should provide.


As a result of the ongoing worldwide research and technology breakthroughs on CDMA cellular
systems, CDMA2000 1xRTT is the new CDMA system standard developed by the TIA and announced by
the ITU in 2000 as the starting point of the third generation of CDMA systems.

CDMA2000 1xEV-DO

CDMA2000 1xEV-DO technology offers near-broadband packet data speeds for wireless
access to the Internet. CDMA stands for Code Division Multiple Access, and 1xEV-DO refers to
1x Evolution-Data Optimized. CDMA2000 1xEV-DO is an alternative to Wideband CDMA
(WCDMA). Both are considered 3G technologies.

Compared to CDMA2000, 1xEV provides substantially higher data-only rates on 1X systems. 1xEV-DO
requires a separate carrier for data. However, in situations where simultaneous voice and data
services are needed, this carrier can be handed off to a 1X carrier. By allocating a separate
carrier for data, the 1xEV-DO system can deliver up to 2.4 Mbps to each user. With the vision of
further enhancing the user rate, CDMA2000 1xRTT is followed by its continuation, CDMA2000 1xEV.

CDMA2000 1xEV-DV

1xEV-DV solutions were expected to be available by 2004. 1xEV-DV brings data and voice services
for CDMA2000 back into one carrier, where a 1xEV-DV carrier will not only provide high speed
data and voice simultaneously, but will also be capable of delivering real-time packet services.

CDMA2000 1X

The first of the third-generation (3G) wireless standards to be commercially deployed, CDMA2000
1X laid the groundwork for the higher speed data rates available today in many markets around
the world that provide consumers and professionals with no-compromise wireless connectivity.

In comparison to IS-95 CDMA systems, which allocate a transmission bandwidth of 1.25 MHz for
multiple users, CDMA2000 1X systems can allocate a 1.25 MHz transmission bandwidth to just one
user. More specifically, the single user is allocated all of the orthogonal spreading sequences.
This greatly improves the data transmission rate by providing an average data throughput rate of
144 kbps per user.

Figure 5.1.4 CDMA2000 1X system bandwidth allocation


FUTURE CDMA 2000 TECHNOLOGIES

Third Generation (3G) is the term used to describe the latest generation of mobile
services which provide better quality voice and high-speed data, access to the Internet and
multimedia services. The International Telecommunication Union (ITU), working with industry
bodies from around the world, has defined the technical requirements and standards as well as
the use of spectrum for 3G systems under the IMT-2000 (International Mobile
Telecommunications-2000) program.
The ITU requires that IMT-2000 (3G) networks, among other capabilities, deliver
improved system capacity and spectrum efficiency over 2G systems and that they support data
services at minimum transmission rates of 144 kbps in mobile (outdoor) and 2 Mbps in fixed
(indoor) environments.
The ultimate 3G solution for CDMA uses multicarrier techniques that group adjacent CDMA2000 1.25
MHz radio channels together for increased bandwidth. The TIA has already announced the CDMA2000
2xRTT (2 x 1.25 MHz channels) and 3xRTT (3 x 1.25 MHz channels) standards, and it can be
expected that the data rate will continue to increase substantially after each successive
evolution. Each evolution of CDMA2000 will continue to be backwards compatible with today's
networks and forwards compatible with future evolutions.
CHAPTER-3
BLOCK DIAGRAM
In previous works, use of the Discrete Fourier Transform was proposed for the implementation of
OFDM. The wavelet transform shows the potential to replace the DFT in OFDM. The wavelet
transform is a tool for analysis of a signal in the time and frequency domains jointly. It is a
multi-resolution analysis mechanism where the input signal is decomposed into different
frequency components, each analyzed with a resolution matching its scale. Using any particular
type of wavelet filter, the system can be designed according to need, and multi-resolution
signals can be generated by the use of wavelets. By varying the wavelet filter, one can design
waveforms with selectable time/frequency partitioning for multi-user applications. Wavelets
possess better orthogonality and are localized in both the time and frequency domains. Because
of this good orthogonality, wavelets are capable of reducing the power of the ISI and ICI, which
result from loss of orthogonality.

To reduce ISI and ICI, the conventional OFDM system uses a cyclic prefix, which consumes 20% of
the available bandwidth and so results in bandwidth inefficiency; this cyclic prefix is not
required in a wavelet based OFDM system. Complexity can also be reduced by using the wavelet
transform, whose complexity is O[N] as compared with the O[N log2 N] complexity of the Fourier
transform. Wavelet based OFDM is simple, whereas DFT based OFDM is complex. Wavelet based OFDM
is flexible as well, and because it provides better orthogonality, there is no need for the
cyclic prefixing that DFT based OFDM requires to maintain orthogonality, so the wavelet based
system is more bandwidth efficient than DFT based OFDM. In the discrete wavelet transform
(DWT), the input signal passes through several different filters and is decomposed into low pass
and high pass bands. During decomposition, the high pass filter removes the frequencies below
half of the highest frequency and the low pass filter removes the frequencies above half of the
highest frequency.

The decomposition halves the time resolution, because half of the samples are used to
characterize the signal; similarly, the frequency resolution is doubled. This decomposition
process is repeated to obtain the wavelet coefficients of the required level. Two types of
coefficients are obtained through processing: the first are called detail coefficients, obtained
through the high pass filter, and the second are called coarse approximations, obtained through
the low pass filter associated with the scaling process. After passing the data through the
filters, the decimation process is performed. The whole procedure continues until the required
level is obtained. This decomposition can be given as

yhigh[k] = Σ_n x[n] g[2k − n]
ylow[k] = Σ_n x[n] h[2k − n]

where x[n] is the original signal, g[n] is the impulse response of the half-band high pass
filter and h[n] is the impulse response of the half-band low pass filter. yhigh[k] and ylow[k]
are obtained after filtering and decimation by a factor of 2.
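One decomposition level can be sketched with the simplest wavelet, the Haar pair (low pass h = [1/√2, 1/√2], high pass g = [1/√2, −1/√2]); the input samples are arbitrary example values:

```python
import math

# One DWT level with the Haar filters: filtering followed by keeping
# every second output implements the decimation by 2. Input samples are
# arbitrary example values.
def haar_dwt_level(x):
    s = 1 / math.sqrt(2)
    ylow = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]   # approximations
    yhigh = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]  # details
    return ylow, yhigh

approx, detail = haar_dwt_level([4, 2, 5, 5])
print([round(v, 4) for v in approx])  # [4.2426, 7.0711]
print([round(v, 4) for v in detail])  # [1.4142, 0.0]
```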

In the inverse discrete wavelet transform (IDWT), the reverse of the decomposition process is
performed: upsampling is done first, and then the signal is passed through the filters. The data
obtained after filtering are combined to obtain the reconstructed data. The number of levels
during reconstruction is the same as that of the decomposition.
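For the same Haar case, the inverse step combines the approximation and detail coefficients to recover the original samples exactly; the coefficient values below correspond to the example signal [4, 2, 5, 5]:

```python
import math

# Inverse Haar step: upsample and combine approximation (ylow) and detail
# (yhigh) coefficients, recovering the original pair of samples from each
# coefficient pair.
def haar_idwt_level(ylow, yhigh):
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(ylow, yhigh):
        x += [s * (a + d), s * (a - d)]
    return x

# One-level Haar coefficients of the signal [4, 2, 5, 5]:
ylow = [6 / math.sqrt(2), 10 / math.sqrt(2)]
yhigh = [2 / math.sqrt(2), 0.0]
print([round(v, 6) for v in haar_idwt_level(ylow, yhigh)])  # [4.0, 2.0, 5.0, 5.0]
```

This perfect-reconstruction property is what lets the wavelet OFDM receiver undo the transmitter's IDWT without a cyclic prefix.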

In this proposed model we use the IDWT and DWT in place of the IDFT and DFT. An AWGN channel is
used for transmission and cyclic prefixing is not used. First of all, convolutional encoding is
done, followed by interleaving; then the data are converted to decimal form and modulation is
done next. After modulation, pilot insertion and sub-carrier mapping are done, followed by the
IDWT of the data, which provides the orthogonality to the subcarriers. The IDWT converts the
frequency domain signal to the time domain. After the signal passes through the channel, the DWT
is performed, then pilot synchronization, where the pilots inserted at the transmitter are
removed, and then demodulation is done.

Demodulated data are converted to binary form, then de-interleaved and decoded to obtain the
original transmitted data.
CHAPTER 4

BACKGROUND
Introduction:

A set of requirements for the 4th generation of cellular systems is specified by the
International Telecommunication Union Radiocommunication Sector (ITU-R). The data rate
requirement was specified in the International Mobile Telecommunications Advanced (IMT-Advanced)
project. The 3rd Generation Partnership Project (3GPP) was established in 1998 [1]. 3GPP started
working on the LTE project to define the Radio Access Network (RAN) and core network [1].
3GPP's candidate for 4G was LTE-Advanced. OFDM is one of the main techniques employed in LTE to
enhance the data rate.
Spectrum efficiency and flexible utilization of spectrum are highly required today for different
wireless communication applications. In multicarrier communication, the main idea is to divide
the data into several streams and use them to modulate different carriers. The two main
advantages of multicarrier communication are, first, that there is no noise enhancement from
equalizers as required in single carrier systems, and second, that the long symbol duration
reduces the effect of fading [2]. In OFDM, the subcarriers used are orthogonal to each other.
Orthogonality allows the subcarriers to overlap in the frequency domain, so bandwidth efficiency
is obtained without any ICI.

The wavelet transform is used to analyze signals by the coefficients of wavelets in both the
time and frequency domains. The basis functions of the transform are localized in both time and
frequency; here the elementary waveforms are not sine and cosine waveforms as in the Fourier
transform. ISI and ICI are generally caused by loss of orthogonality between the carriers due to
multipath propagation of the signal in Discrete Fourier Transform (DFT) based OFDM. ISI occurs
between successive symbols of the same sub-carrier, and ICI among different signals at different
subcarriers. Both are avoided by use of cyclic prefixing, which causes power loss and bandwidth
inefficiency in DFT based OFDM.
Introduction to Orthogonal Frequency Division Multiplexing (OFDM):

Orthogonal Frequency Division Multiplexing (OFDM) has been attracting substantial attention due
to its excellent performance under severe channel conditions. The rapidly growing applications
of OFDM include Wi-MAX, DVB/DAB and 4G wireless systems.

Initial proposals for OFDM were made in the 60s and the 70s. It has taken more than a quarter of
a century for this technology to move from the research domain to the industry.

The concept of OFDM is quite simple, but the practicality of implementing it has many
complexities, so it is largely realized in software. OFDM depends on the orthogonality
principle. Orthogonality means the sub-carriers are orthogonal to each other, so that cross talk
between co-channels is eliminated and inter-carrier guard bands are not required. This greatly
simplifies the design of both the transmitter and receiver; unlike conventional FDM, a separate
filter for each sub-channel is not required.

Orthogonal Frequency Division Multiplexing (OFDM) is a digital multicarrier modulation scheme,
which uses a large number of closely spaced orthogonal sub-carriers. A single stream of data is
split into parallel streams, each of which is coded and modulated onto a subcarrier, a term
commonly used in OFDM systems. Each sub-carrier is modulated with a conventional modulation
scheme (such as quadrature amplitude modulation) at a low symbol rate, maintaining data rates
similar to conventional single carrier modulation schemes in the same bandwidth. Thus the high
bit rate formerly carried on a single carrier is reduced to lower bit rates on the subcarriers.
Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier transmission
technique, which divides the available spectrum into many carriers, each one being modulated by
a low rate data stream. OFDM is similar to FDMA in that the multiple user access is achieved by
subdividing the available bandwidth into multiple channels that are then allocated to users.
However, OFDM uses the spectrum much more efficiently by spacing the channels much closer
together. This is achieved by making all the carriers orthogonal to one another, preventing
interference between the closely spaced carriers.

Coded Orthogonal Frequency Division Multiplexing (COFDM) is the same as OFDM


except that forward error correction is applied to the signal before transmission. This is to
overcome errors in the transmission due to lost carriers from frequency selective fading, channel
noise and other propagation effects. For this discussion the terms OFDM and COFDM are used
interchangeably, as the main focus of this thesis is on OFDM, but it is assumed that any practical
system will use forward error correction, thus would be COFDM.

In FDMA each user is typically allocated a single channel, which is used to transmit all the
user information. The bandwidth of each channel is typically 10 kHz-30 kHz for voice
communications. However, the minimum required bandwidth for speech is only 3 kHz. The allocated
bandwidth is made wider than the minimum amount required to prevent channels from interfering
with one another. This extra bandwidth allows signals from neighboring channels to be filtered
out, and allows for any drift in the center frequency of the transmitter or receiver. In a
typical system up to 50% of the total spectrum is wasted due to the extra spacing between
channels.

This problem becomes worse as the channel bandwidth becomes narrower and the frequency band
increases. Most digital phone systems use vocoders to compress the digitized speech. This allows
for an increased system capacity due to a reduction in the bandwidth required for each user.
Current vocoders require a data rate somewhere between 4-13 kbps, depending on the quality of
the sound and the type used. Thus each user only requires a minimum bandwidth of somewhere
between 2-7 kHz, using QPSK modulation. However, simple FDMA does not handle such narrow
bandwidths very efficiently. TDMA partly overcomes this problem by using wider bandwidth
channels, which are shared by several users. Multiple users access the same channel by
transmitting their data in time slots. Thus, many low data rate users can be combined together
to transmit in a single channel whose bandwidth is sufficient for the spectrum to be used
efficiently.

There are however, two main problems with TDMA. There is an overhead associated
with the change over between users due to time slotting on the channel. A change over time must
be allocated to allow for any tolerance in the start time of each user, due to propagation delay
variations and synchronization errors. This limits the number of users that can be sent efficiently
in each channel. In addition, the symbol rate of each channel is high (as the channel handles the
information from multiple users) resulting in problems with multipath delay spread.

OFDM overcomes most of the problems with both FDMA and TDMA. OFDM splits the available bandwidth
into many narrowband channels (typically 100-8000). The carriers for each channel are made
orthogonal to one another, allowing them to be spaced very close together, with none of the
overhead seen in the FDMA example. Because of this there is no great need for users to be time
multiplexed as in TDMA, so there is no overhead associated with switching between users.

The orthogonality of the carriers means that each carrier has an integer number of cycles over a
symbol period. Due to this, the spectrum of each carrier has a null at the center frequency of
each of the other carriers in the system. This results in no interference between the carriers,
allowing them to be spaced as closely as theoretically possible. This overcomes the problem of
the overhead carrier spacing required in FDMA. Each carrier in an OFDM signal has a very narrow
bandwidth (e.g. 1 kHz), thus the resulting symbol rate is low. This gives the signal a high
tolerance to multipath delay spread, as the delay spread must be very long to cause significant
ISI (e.g. > 500 us).
In practice, OFDM signals are generated and detected using the Fast Fourier Transform algorithm.
OFDM has developed into a popular scheme for wideband digital communication, over wireless links
as well as copper wires. Actually, FDM systems have been common for many decades. However, in
FDM, the carriers are all independent of each other: there is a guard period in between them and
no overlap whatsoever. This works well because in an FDM system each carrier carries data meant
for a different user or application.

FM radio is an FDM system. FDM systems are not ideal for what we want for wideband
systems. Using FDM would waste too much bandwidth. This is where OFDM makes sense. In
OFDM, subcarriers overlap. They are orthogonal because the peak of one subcarrier occurs when
other subcarriers are at zero. This is achieved by realizing all the subcarriers together using
Inverse Fast Fourier Transform (IFFT). The demodulator at the receiver is parallel channels from
an FFT block. Note that each subcarrier can still be modulated independently.

3.1.1. OFDM Carriers

As aforementioned, OFDM is a special form of MCM, and the OFDM time domain waveforms are chosen
such that mutual orthogonality is ensured even though sub-carrier spectra may overlap. With
respect to OFDM, it can be stated that orthogonality is an implication of a definite and fixed
relationship between all carriers in the collection.

It means that each carrier is positioned such that it occurs at the zero energy frequency point
of all other carriers. The sinc function, illustrated in Fig. 2.1, exhibits this property, and
it is used as a carrier in an OFDM system.
fu is the sub-carrier spacing

Figure .2.1. OFDM sub carriers in the frequency domain

3.1.2. OFDM generation

To generate OFDM successfully, the relationship between all the carriers must be carefully
controlled to maintain the orthogonality of the carriers. For this reason, OFDM is generated by
firstly choosing the spectrum required, based on the input data and modulation scheme used. Each
carrier to be produced is assigned some data to transmit. The required amplitude and phase of
the carrier is then calculated based on the modulation scheme (typically differential BPSK,
QPSK, or QAM). The required spectrum is then converted back to its time domain signal using an
Inverse Fourier Transform. In most applications, an Inverse Fast Fourier Transform (IFFT) is
used. The IFFT performs the transformation very efficiently, and provides a simple way of
ensuring the carrier signals produced are orthogonal.

The IFFT performs the reverse process, transforming a spectrum (amplitude and phase of each component) into a time domain signal. An IFFT converts a number of complex data points, whose length is a power of 2, into a time domain signal of the same number of points. Each data point in the frequency spectrum used for an FFT or IFFT is called a bin. The orthogonal carriers required for the OFDM signal can be easily generated by setting the amplitude and phase of each bin, then performing the IFFT. Since each bin of an IFFT corresponds to the amplitude and phase of a set of orthogonal sinusoids, the reverse process guarantees that the carriers generated are orthogonal.
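This bin-by-bin view can be illustrated with a small NumPy sketch (an illustration, not part of the report's simulation): each IFFT bin yields one carrier waveform, and the carriers' pairwise inner products vanish, confirming orthogonality.

```python
import numpy as np

N = 8                                  # number of subcarrier bins (power of 2)
# Each row of F is the time-domain waveform of one IFFT bin:
# the IFFT of a unit impulse placed in that bin.
F = np.stack([np.fft.ifft(np.eye(N)[k]) for k in range(N)])

# Gram matrix of the carrier set: inner products between distinct bin
# waveforms vanish (orthogonality), while each waveform has energy 1/N
# under NumPy's IFFT normalization.
gram = F @ F.conj().T
off_diag = gram - np.diag(np.diag(gram))
print(np.allclose(off_diag, 0))        # carriers are mutually orthogonal
```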

Figure. 2.2 OFDM Block Diagram

Fig. 2.2 shows the setup for a basic OFDM transmitter and receiver. The signal generated is at baseband; thus the signal is filtered, then stepped up in frequency before transmission. OFDM time domain waveforms are chosen such that mutual orthogonality is ensured even though sub-carrier spectra may overlap. Typically QAM or Differential Quadrature Phase Shift Keying (DQPSK) modulation schemes are applied to the individual sub-carriers. To prevent ISI, the individual blocks are separated by guard intervals wherein the blocks are periodically extended.

The Fast Fourier Transform (FFT) transforms a cyclic time domain signal into its
equivalent frequency spectrum. This is done by finding the equivalent waveform, generated by a
sum of orthogonal sinusoidal components. The amplitude and phase of the sinusoidal
components represent the frequency spectrum of the time domain signal.
However, OFDM is not without drawbacks. One critical problem is its high peak-to-
average power ratio (PAPR). High PAPR increases the complexity of analog-to-digital (A/D) and
digital-to-analog (D/A) converters, and lowers the efficiency of power amplifiers. Over the past
decade various PAPR reduction techniques have been proposed, such as block coding, selective
mapping (SLM) and tone reservation, to name a few. Among all these techniques the simplest solution is to clip the transmitted signal when its amplitude exceeds a desired threshold. Clipping is a highly nonlinear process, however, and produces significant out-of-band interference (OBI).

A good remedy for the OBI is the so-called companding. The technique soft-compresses, rather than hard-clips, the signal peak and causes far less OBI. The method was first proposed with the classical μ-law transform and was shown to be rather effective. Since then many different companding transforms with better performance have been published. This paper proposes and evaluates a new companding algorithm. The algorithm uses the special Airy function and is able to offer an improved bit error rate (BER) and minimized OBI while reducing PAPR effectively. The paper is organized as follows: in the next section the PAPR problem in OFDM is briefly reviewed.
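As an illustration of companding (a minimal sketch, not the Airy-function algorithm proposed above; μ = 255 is an assumed parameter), the classical μ-law transform can be applied to the envelope of one OFDM symbol, lowering the PAPR while remaining invertible at the receiver:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 256, 255.0

# One OFDM symbol: random QPSK on N subcarriers, IFFT to time domain.
bits = rng.integers(0, 2, size=(N, 2))
sym = (2*bits[:, 0] - 1 + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)        # unit average power

def papr_db(s):
    p = np.abs(s)**2
    return 10*np.log10(p.max() / p.mean())

# mu-law compressor acting on the envelope; the phase is preserved.
A = np.abs(x).max()
compressed = A * np.log1p(mu*np.abs(x)/A) / np.log1p(mu) * np.exp(1j*np.angle(x))
# Matching expander at the receiver inverts the compression exactly.
expanded = A * np.expm1(np.abs(compressed)*np.log1p(mu)/A) / mu * np.exp(1j*np.angle(compressed))

print(papr_db(x) > papr_db(compressed))  # companding lowers the PAPR
print(np.allclose(expanded, x))          # and is invertible without noise
```

In practice the expander also acts on channel noise, which is what degrades BER and motivates better-behaved companding transforms such as the one proposed here.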

PAPR IN OFDM

OFDM is a powerful modulation technique being used in many new and emerging
broadband communication systems.

Advantages:

Robustness against frequency selective fading and time dispersion.

Transmission rates close to capacity can be achieved.

Low computational complexity implementation (FFT).

Drawbacks:

Sensitivity to frequency offset.

Sensitivity to nonlinear amplification.

Compensation techniques for nonlinear effects:

Linearization (digital predistortion).

Peak-to-average power ratio (PAPR) reduction.

Post-processing.

PAPR-reduction techniques:

Varying PAPR-reduction capabilities, power, bandwidth and complexity requirements.

The performance of a system employing these techniques has not been fully analyzed.

PAPR is a very well known measure of the envelope fluctuations of a MC signal, used as a figure of merit. The problem of reducing the envelope fluctuations has thus turned into reducing PAPR.

In this paper we:

present a quantitative study of PAPR and NL distortion;

simulate an OFDM system employing some of these techniques.

Motivation: evaluate the performance improvement capabilities of PAPR-reducing methods.

Orthogonal Frequency Division Multiplexing


An OFDM signal can be expressed as a sum of N subcarriers, each modulated by a complex baseband symbol S_k, where N is the number of subcarriers. If the OFDM signal is sampled at the Nyquist rate, the complex samples can be described as

s_n = (1/√N) Σ_{k=0}^{N−1} S_k e^{j2πkn/N},  n = 0, ..., N−1

Peak-to-average power ratio

Let s^(m) be the m-th OFDM symbol; then its PAPR is defined as

PAPR_m = max_n |s_n^(m)|² / ( E[ ‖s^(m)‖² ] / N )

The CCDF of the PAPR of a non-oversampled OFDM signal is

Pr(γ > γ₀) = 1 − (1 − e^(−γ₀))^N

The CCDF of PAPR increases with the number of subcarriers in the OFDM system. It is widely believed that the more subcarriers are used in an OFDM system, the worse the distortion caused by the nonlinearity will be.
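A small Monte Carlo sketch (an illustrative assumption-laden check, not the report's MATLAB code) can verify this trend: for complex Gaussian frequency-domain data at the Nyquist rate, the empirical CCDF tracks 1 − (1 − e^(−γ₀))^N and grows with N.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr(S):
    """Linear PAPR of each row of frequency-domain OFDM data S."""
    x = np.fft.ifft(S, axis=-1) * np.sqrt(S.shape[-1])  # unit-power time samples
    p = np.abs(x) ** 2
    return p.max(axis=-1) / p.mean()   # normalize by the overall average power

trials, g0 = 20000, 6.0                # threshold gamma_0 = 6 (linear scale)
ccdf = {}
for N in (64, 256):
    S = (rng.standard_normal((trials, N)) +
         1j * rng.standard_normal((trials, N))) / np.sqrt(2)
    empirical = np.mean(papr(S) > g0)
    analytic = 1 - (1 - np.exp(-g0)) ** N
    ccdf[N] = (empirical, analytic)
    print(N, empirical, analytic)      # CCDF grows with N, matching the formula
```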

In-band and out-of-band distortion

If N is large enough, the OFDM signal can be approximated as a complex Gaussian distributed random variable. Thus its envelope is Rayleigh distributed:

f_X(x) = (2x/σ²) e^(−x²/σ²),

with E[X] = σ√π/2 and var[X] = σ²(1 − π/4),

where the variance of each of the real and imaginary parts of the signal is σ²/2.
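These moments can be verified with a short Monte Carlo sketch (illustrative, with an arbitrarily chosen σ²): drawing the real and imaginary parts with variance σ²/2 each, the sample mean and variance of the envelope match σ√π/2 and σ²(1 − π/4).

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 2.0                      # total signal power sigma^2 (assumed value)
n = 1_000_000

# Real and imaginary parts each carry half the power (variance sigma^2/2).
i = rng.normal(0, np.sqrt(sigma2/2), n)
q = rng.normal(0, np.sqrt(sigma2/2), n)
env = np.hypot(i, q)              # envelope of the complex Gaussian signal

sigma = np.sqrt(sigma2)
print(env.mean(), sigma*np.sqrt(np.pi)/2)   # E[X]  = sigma*sqrt(pi)/2
print(env.var(), sigma2*(1 - np.pi/4))      # var[X] = sigma^2*(1 - pi/4)
```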

Bussgang theorem

An interesting result is that the output of a nonlinearity (NL) with Gaussian input (such as OFDM) can be written as

y(t) = α x(t) + d(t),  where α = R_xy(t₁) / R_xx(t₁)
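A quick numerical check of the theorem (an illustrative sketch, with a hard limiter standing in for the nonlinearity): estimating α from sample correlations makes the residual distortion term d(t) uncorrelated with the Gaussian input, and for a clipper at level A the estimate approaches the known value erf(A/√2).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
x = rng.standard_normal(n)              # Gaussian input (models an OFDM signal)
A = 1.0
y = np.clip(x, -A, A)                   # memoryless nonlinearity: a limiter

# Bussgang: alpha = R_xy(0) / R_xx(0); then d = y - alpha*x is uncorrelated
# with the input x.
alpha = np.mean(x*y) / np.mean(x*x)
d = y - alpha*x
print(alpha)                            # close to erf(A/sqrt(2)) ~ 0.6827
print(abs(np.mean(x*d)) < 1e-8)         # distortion uncorrelated with input
```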
Considerations on PAPR reduction

In order to be useful for improving the system performance, PAPR should predict the amount of distortion introduced by the nonlinearity. PAPR increases with the number of subcarriers in the OFDM signal.

The distortion term and the uniform attenuation and rotation of the constellation only depend on the back-off.

The effect of a nonlinearity on an OFDM signal is not clearly related to its PAPR.

The effective energy per bit at the input of the nonlinearity depends on Eo, the average energy of the signal at the input of the nonlinearity, K, the number of bits per symbol, and p, the power efficiency.

There will only be a BER performance improvement when the effect of reducing the in-band distortion becomes noticeable and more important than the loss of power efficiency. This is not taken into account in the majority of the PAPR-reducing methods.

Let X(0), X(1), ..., X(N−1) represent the data sequence to be transmitted in an OFDM symbol with N subcarriers. The baseband representation of the OFDM symbol is given by:

x(t) = (1/√N) Σ_{k=0}^{N−1} X(k) e^(j2πkt/T),  0 ≤ t < T,

where T is the duration of the OFDM symbol. According to the central limit theorem, when N is large, both the real and imaginary parts of x(t) become Gaussian distributed, each with zero mean and a variance of E[|X(k)|²]/2, and the amplitude of the OFDM symbol follows a Rayleigh distribution. Consequently it is possible that the maximum amplitude of the OFDM signal may well exceed its average amplitude. Practical hardware (e.g. A/D and D/A converters, power amplifiers) has finite dynamic range; therefore the peak amplitude of the OFDM signal must be limited. PAPR is mathematically defined as:

PAPR = max |x(t)|² / ( (1/T) ∫₀ᵀ |x(t)|² dt )

It is easy to see from the above that PAPR reduction may be achieved by decreasing the numerator max |x(t)|², increasing the denominator (1/T) ∫₀ᵀ |x(t)|² dt, or both.

The effectiveness of a PAPR reduction technique is measured by the complementary cumulative distribution function (CCDF), which is the probability that PAPR exceeds some threshold, i.e.:

CCDF = Probability(PAPR > γ₀), where γ₀ is the threshold.

MULTIPLE ACCESS TECHNIQUES:

Multiple access schemes are used to allow many simultaneous users to use the same fixed
bandwidth radio spectrum. In any radio system, the bandwidth, which is allocated to it, is always
limited. For mobile phone systems the total bandwidth is typically 50 MHz, which is split in half
to provide the forward and reverse links of the system.

Sharing of the spectrum is required in order to increase the user capacity of any wireless network. FDMA, TDMA and CDMA are the three major methods of sharing the available bandwidth among multiple users in wireless systems. There are many extensions and hybrid techniques for these methods, such as OFDM, and hybrid TDMA and FDMA systems. However, an understanding of the three major methods is required for understanding any extensions to these methods.

Frequency Division Multiple Access (FDMA):

In Frequency Division Multiple Access (FDMA), the available bandwidth is subdivided into a number of narrower band channels. Each user is allocated a unique frequency band in which to transmit and receive. During a call, no other user can use the same frequency band. Each user is allocated a forward link channel (from the base station to the mobile phone) and a reverse channel (back to the base station), each being a one-way link. The transmitted signal on each of the channels is continuous, allowing analog transmissions. The bandwidths of FDMA channels are generally low (30 kHz) as each channel only supports one user. FDMA is used as the primary breakup of large allocated frequency bands and is used as part of most multi-channel systems.

Fig. 1.2 & Fig. 1.3 show the allocation of the available bandwidth into several channels.

Time Division Multiple Access:

Time Division Multiple Access (TDMA) divides the available spectrum into multiple time
slots, by giving each user a time slot in which they can transmit or receive. Fig. 1.4 shows how
the time slots are provided to users in a round robin fashion, with each user being allotted one
time slot per frame. TDMA systems transmit data in a buffer and burst method, thus the
transmission of each channel is non-continuous.

Fig 1.4 TDMA scheme, where each user is allocated a small time slot
The input data to be transmitted is buffered over the previous frame and burst transmitted at a higher rate during the time slot for the channel. TDMA cannot send analog signals directly due to the buffering required, thus it is only used for transmitting digital data. TDMA can suffer from multipath effects, as the transmission rate is generally very high. This leads to the multipath signals causing inter-symbol interference. TDMA is normally used in conjunction with FDMA to subdivide the total available bandwidth into several channels. This is done to reduce the number of users per channel, allowing a lower data rate to be used. This helps reduce the effect of delay spread on the transmission. Fig. 1.5 shows the use of TDMA with FDMA. Each channel based on FDMA is further subdivided using TDMA, so that several users can transmit on the one channel. This type of transmission technique is used by most digital second generation mobile phone systems. For GSM, the total allocated bandwidth of 25 MHz is divided into 125 channels of 200 kHz each using FDMA. These channels are then subdivided further by using TDMA so that each 200 kHz channel allows 8-16 users.

Fig. 1.5 TDMA/FDMA hybrid, showing that the bandwidth is split into frequency channels and time slots.

Code Division Multiple Access:

Code Division Multiple Access (CDMA) is a spread spectrum technique that uses neither frequency channels nor time slots. In CDMA, the narrow band message (typically digitized voice data) is multiplied by a large bandwidth signal, which is a pseudo random noise code (PN code). All users in a CDMA system use the same frequency band and transmit simultaneously. The transmitted signal is recovered by correlating the received signal with the PN code used by the transmitter. Fig. 1.6 shows the general use of the spectrum using CDMA.

Some of the properties that have made CDMA useful are: Signal hiding and non-
interference with existing systems, Anti-jam and interference rejection, Information security,
Accurate Ranging, Multiple User Access, Multipath tolerance.

Fig. 1.6 Code Division Multiple Access (CDMA)

Fig. 1.7 shows the process of a CDMA transmission. The data to be transmitted (a) is spread before transmission by modulating the data using a PN code. This broadens the spectrum as shown in (b). In this example the process gain is 125, as the spread spectrum bandwidth is 125 times greater than the data bandwidth. Part (c) shows the received signal. This consists of the required signal, plus background noise, and any interference from other CDMA users or radio sources.

The received signal is recovered by multiplying the signal by the original spreading code. This process causes the wanted received signal to be despread back to the original transmitted data. However, all other signals, which are uncorrelated to the PN spreading code used, become more spread. The wanted signal in (d) is then filtered, removing the widely spread interference and noise signals.

Fig . 1.7. Basic CDMA Generation.

CDMA Generation:

CDMA is achieved by modulating the data signal by a pseudo random noise sequence (PN code), which has a chip rate higher than the bit rate of the data. The PN code sequence is a sequence of ones and zeros (called chips), which alternate in a random fashion. The data is modulated by modulo-2 adding the data with the PN code sequence. This can also be done by multiplying the signals, provided the data and PN code are represented by 1 and -1 instead of 1 and 0. Fig. 1.8 shows a basic CDMA transmitter.

Fig. 1.8 Simple direct sequence modulator


The PN code used to spread the data can be of two main types. A short PN code (typically 10-128 chips in length) can be used to modulate each data bit. The short PN code is then repeated for every data bit, allowing for quick and simple synchronization of the receiver. Fig. 1.9 shows the generation of a CDMA signal using a 10-chip length short code. Alternatively a long PN code can be used. Long codes are generally thousands to millions of chips in length, thus are only repeated infrequently. Because of this they are useful for added security as they are more difficult to decode.

Fig.1.9 Direct sequence signals
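The short-code case can be sketched in a few lines of NumPy (an illustration, not the report's code): a 10-chip PN code spreads each data bit, and correlating the received chips with the same code recovers the data exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
chips = 10                               # short PN code: 10 chips per data bit
data = rng.choice([-1, 1], size=8)       # data bits in +/-1 form
pn = rng.choice([-1, 1], size=chips)     # PN spreading code in +/-1 form

# Spreading: each data bit multiplies one repetition of the PN code.
tx = np.repeat(data, chips) * np.tile(pn, data.size)

# Despreading: multiply by the same code and integrate over each bit period.
rx = tx * np.tile(pn, data.size)         # chips of the wanted user re-align
recovered = np.sign(rx.reshape(data.size, chips).sum(axis=1))
print(np.array_equal(recovered, data))   # data recovered exactly
```

With other users present, their chips stay spread after despreading and are suppressed by the integration, which is the process gain described above.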

Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier transmission technique, which divides the available spectrum into many carriers, each one being modulated by a low rate data stream. OFDM is similar to FDMA in that multiple user access is achieved by subdividing the available bandwidth into multiple channels that are then allocated to users. However, OFDM uses the spectrum much more efficiently by spacing the channels much closer together. This is achieved by making all the carriers orthogonal to one another, preventing interference between the closely spaced carriers.

Coded Orthogonal Frequency Division Multiplexing (COFDM) is the same as OFDM except that forward error correction is applied to the signal before transmission. This is to overcome errors in the transmission due to lost carriers from frequency selective fading, channel noise and other propagation effects. For this discussion the terms OFDM and COFDM are used interchangeably, as the main focus of this thesis is on OFDM, but it is assumed that any practical system will use forward error correction, and thus would be COFDM.

In FDMA each user is typically allocated a single channel, which is used to transmit all the user information. The bandwidth of each channel is typically 10 kHz-30 kHz for voice communications. However, the minimum required bandwidth for speech is only 3 kHz. The allocated bandwidth is made wider than the minimum amount required to prevent channels from interfering with one another. This extra bandwidth is to allow for signals from neighboring channels to be filtered out, and to allow for any drift in the center frequency of the transmitter or receiver. In a typical system up to 50% of the total spectrum is wasted due to the extra spacing between channels.

This problem becomes worse as the channel bandwidth becomes narrower and the frequency band increases. Most digital phone systems use vocoders to compress the digitized speech. This allows for an increased system capacity due to a reduction in the bandwidth required for each user. Current vocoders require a data rate somewhere between 4-13 kbps, depending on the quality of the sound and the type used. Thus each user only requires a minimum bandwidth of somewhere between 2-7 kHz, using QPSK modulation. However, simple FDMA does not handle such narrow bandwidths very efficiently. TDMA partly overcomes this problem by using wider bandwidth channels, which are used by several users. Multiple users access the same channel by transmitting their data in time slots. Thus, many low data rate users can be combined together to transmit in a single channel, which has a bandwidth sufficient so that the spectrum can be used efficiently.
There are however, two main problems with TDMA. There is an overhead associated
with the change over between users due to time slotting on the channel. A change over time must
be allocated to allow for any tolerance in the start time of each user, due to propagation delay
variations and synchronization errors. This limits the number of users that can be sent efficiently
in each channel. In addition, the symbol rate of each channel is high (as the channel handles the
information from multiple users) resulting in problems with multipath delay spread.

OFDM overcomes most of the problems with both FDMA and TDMA. OFDM splits the available bandwidth into many narrow band channels (typically 100-8000). The carriers for each channel are made orthogonal to one another, allowing them to be spaced very close together, with no overhead as in the FDMA example. Because of this there is no great need for users to be time multiplexed as in TDMA, and thus there is no overhead associated with switching between users.

The orthogonality of the carriers means that each carrier has an integer number of cycles over a symbol period. Due to this, the spectrum of each carrier has a null at the center frequency of each of the other carriers in the system. This results in no interference between the carriers, allowing them to be spaced as close as theoretically possible. This overcomes the problem of the overhead carrier spacing required in FDMA. Each carrier in an OFDM signal has a very narrow bandwidth (e.g. 1 kHz); thus the resulting symbol rate is low. This results in the signal having a high tolerance to multipath delay spread, as the delay spread must be very long to cause significant ISI (e.g. > 500 µs).

Orthogonality:
In geometry, orthogonal means, "involving right angles" (from Greek ortho, meaning
right, and gon meaning angled). The term has been extended to general use, meaning the
characteristic of being independent (relative to something else). It also can mean: non-redundant,
non-overlapping, or irrelevant. Orthogonality is defined for both real and complex valued functions. The functions ψ_m(t) and ψ_n(t) are said to be orthogonal with respect to each other over the interval a < t < b if they satisfy the condition:

∫ₐᵇ ψ_m(t) ψ_n*(t) dt = 0,  where n ≠ m.    (6)
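Condition (6) can be checked numerically for the complex exponential carriers used in OFDM. The sketch below (an illustration with arbitrarily chosen carrier indices) approximates the integral with a Riemann sum over one symbol period.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 10_000, endpoint=False)
dt = t[1] - t[0]

def carrier(k):
    # Each carrier completes an integer number k of cycles over the period T.
    return np.exp(2j*np.pi*k*t/T)

# Riemann-sum approximation of the inner product over [0, T).
inner = lambda m, n: np.sum(carrier(m)*np.conj(carrier(n)))*dt
print(abs(inner(3, 5)))                  # ~0: distinct carriers are orthogonal
print(abs(inner(4, 4)))                  # ~T: each carrier has energy T
```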

In this letter, we reformulate which signal characteristics to consider, beyond dynamic range, that can be linked directly with BER. In the analysis, we assume that the nonlinear amplifier chain includes a predistorter prior to the HPA, namely a PD-HPA. The PD-HPA has a zero AM-PM characteristic and an AM-AM characteristic given by

g[r(t)] = r(t) for r(t) ≤ A_sat, and g[r(t)] = A_sat otherwise,

where r(t) is the input to the PD-HPA and A_sat is the PD-HPA saturation (clipping) threshold. Assuming that the baseband CDMA signal is characterized as a band-limited complex Gaussian process, we establish analytical expressions for the signal characteristics, with respect to the IBO level, that lead to BER degradation. Moreover, we develop an analytic expression for the BER performance in the presence of the considered nonlinear amplifier chain.
CHAPTER-5
RESULTS
SIMULATION RESULTS OF BER PERFORMANCE EVALUATION

Using MATLAB, the performance characteristics of DFT based OFDM and wavelet based OFDM are obtained for the different modulations used in LTE, as shown in figures 3-5. The modulations that can be used for LTE are QPSK, 16 QAM and 64 QAM (uplink and downlink). QPSK does not carry data at very high speed. Only when the signal to noise ratio is of good quality can the higher modulation techniques be used. Lower forms of modulation (QPSK) do not require a high signal to noise ratio.

For the purpose of simulation, different values of signal to noise ratio (SNR) are introduced through the SUI channel. Data of 9600 bits is sent in the form of 100 symbols, so one symbol is of 96 bits. For a particular value of SNR, averaging over all the symbols is done and the BER is obtained; the same process is repeated for all the values of SNR and the final BERs are obtained.
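The averaging procedure above can be illustrated with a minimal NumPy sketch. This is not the report's MATLAB/SUI simulation: it assumes QPSK on 48 subcarriers (96 bits per OFDM symbol, as above) over a plain AWGN channel in place of the SUI model, and checks the measured BER against the closed-form QPSK result.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(5)
N, nsym, snr_db = 48, 2000, 6.0          # 48 QPSK subcarriers = 96 bits/symbol
snr = 10 ** (snr_db / 10)                # linear Es/N0 per subcarrier

bits = rng.integers(0, 2, size=(nsym, N, 2))
sym = (2*bits[..., 0] - 1 + 1j*(2*bits[..., 1] - 1)) / np.sqrt(2)  # QPSK
x = np.fft.ifft(sym, axis=1) * np.sqrt(N)                          # OFDM symbols

# AWGN channel (a stand-in for the SUI channel used in the report).
noise = (rng.standard_normal(x.shape) +
         1j*rng.standard_normal(x.shape)) * np.sqrt(1/(2*snr))
y = np.fft.fft(x + noise, axis=1) / np.sqrt(N)                     # demodulate

rx_bits = np.stack([y.real > 0, y.imag > 0], axis=-1).astype(int)
ber = np.mean(rx_bits != bits)
theory = 0.5 * erfc(np.sqrt(snr/2))      # QPSK over AWGN at this Es/N0
print(ber, theory)                       # the two agree closely
```

Replacing the AWGN block with a SUI tap-delay-line model, and the IFFT/FFT pair with a wavelet transform pair, reproduces the comparison reported in this chapter.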

First the performance of DFT based OFDM and wavelet based OFDM is obtained for different modulation techniques. Different wavelet types, daubechies2 (db2) and haar, are used in wavelet based OFDM for QPSK, 16-QAM and 64-QAM.

It is clear from fig. 3, fig. 4 and fig. 5 that the BER performance of wavelet based OFDM is better than that of DFT based OFDM. Fig. 3 indicates that db2 performs better when QPSK is used. Fig. 4 shows that when 16-QAM is used db2 and haar have similar performance, but far better than DFT. In fig. 5, where 64-QAM is used, haar and db2 perform better than DFT.
CHAPTER-6
CONCLUSION

In this paper we analyzed the performance of a wavelet based OFDM system and compared it with the performance of a DFT based OFDM system. From the performance curves we have observed that the BER curves obtained from wavelet based OFDM are better than those of DFT based OFDM. We used three modulation techniques for the implementation, namely QPSK, 16 QAM and 64 QAM, which are used in LTE. In wavelet based OFDM different types of filters can be used with the help of the different wavelets available. We have used the daubechies2 and haar wavelets; both provide their best performance at different intervals of SNR.
REFERENCES

[1] A. Ian F., G. David M., R. Elias Chavarria, The evolution to 4G cellular systems: LTE-advanced, Physical Communication, Elsevier, vol. 3, no. 4, pp. 217-244, Dec. 2010.

[2] B. John A. C., Multicarrier modulation for data transmission: an idea whose time has
come, IEEE Communications magazine, vol. 28, no. 5, pp. 5-14, May 1990.

[3] L. Jun, T. Tjeng Thiang, F. Adachi, H. Cheng Li, BER performance of OFDM-MDPSK
system in frequency selective rician fading and diversity reception IEEE Transactions on
Vehicular Technology, vol. 49, no. 4, pp. 1216-1225, July 2000.

[4] K. Abbas Hasan, M. Waleed A., N. Saad, The performance of multiwavelets based OFDM
system under different channel conditions, Digital signal processing, Elsevier, vol. 20, no. 2,
pp. 472-482, March 2010.

[5] K. Volkan, K. Oguz, Alamouti coded wavelet based OFDM for multipath fading channels, IEEE Wireless Telecommunications Symposium, pp. 1-5, April 2009.

[6] G. Mahesh Kumar, S. Tiwari, Performance evaluation of conventional and wavelet based OFDM system, International Journal of Electronics and Communications, Elsevier, vol. 67, no. 4, pp. 348-354, April 2013.

[7] J. Antony, M. Petri, Wavelet packet modulation for wireless communication, Wireless
communication & mobile computing journal, vol. 5, no. 2, pp. 1-18, March 2005.

[8] L. Madan Kumar, N. Homayoun, A review of wavelets for digital wireless communication, Wireless Personal Communications, Kluwer Academic Publishers-Plenum Publishers, vol. 37, no. 3-4, pp. 387-420, May 2006.
