
NOV/DEC 2016

PART A

1. Compute the bandwidth of the AM signal given by S(t) = 23[1 + 0.8 cos(310t)] cos(23000πt).
S(t) = 23[1 + 0.8 cos(310t)] cos(23000πt)
ωm = 310 rad/s
2πfm = 310
fm = 310/2π = 49.34 Hz
BW = 2fm = 98.67 Hz
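
A quick Python check of the same arithmetic (illustrative only):

from math import pi

omega_m = 310.0              # modulating angular frequency, rad/s
f_m = omega_m / (2 * pi)     # modulating frequency, about 49.34 Hz
bw = 2 * f_m                 # AM bandwidth = 2*fm, about 98.7 Hz
print(f_m, bw)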

2.What are the causes of linear distortion?


OUT OF SYLLABUS

3. Illustrate the relationship between FM and PM.

Frequency modulation (FM):
The maximum frequency deviation depends only on the amplitude of the modulating voltage.
Noise immunity is better than that of PM.
The frequency of the carrier is varied in accordance with the modulating signal.

Phase modulation (PM):
The maximum frequency deviation depends on both the amplitude and the frequency of the modulating voltage.
Noise immunity is better than that of AM but worse than that of FM.
The phase of the carrier is varied in accordance with the modulating signal.

4.What is meant by detection? Name the methods for detecting FM signals.

Demodulation is the process by which the modulating signal is recovered from the modulated
signal. It is the reverse process of modulation.

Methods:
Foster–Seeley discriminator
Ratio detector
Balanced slope detector

5.Define a random variable. Specify the sample space and the random variable for a coin
tossing experiment.
Refer nov / dec 2012 Q.NO .5

6.Give the definition of noise equivalent temperature.


Refer may/june 2016 Q.NO.7
7. Determine the range of tuning of the local oscillator of a superheterodyne receiver when fLO > fc. The broadcast frequency range is 540 kHz to 1600 kHz. Assume fIF = 455 kHz.
fLO = fc + fIF
= (540 kHz to 1600 kHz) + 455 kHz
= 995 kHz to 2055 kHz
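
A quick Python check of the tuning range (illustrative only):

f_c_low, f_c_high = 540e3, 1600e3   # AM broadcast band, Hz
f_if = 455e3                        # intermediate frequency, Hz
print((f_c_low + f_if) / 1e3, (f_c_high + f_if) / 1e3)   # 995.0 kHz to 2055.0 kHz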

8.What is capture effect in FM?


An FM receiver locks onto ("captures") the stronger of two signals received at or near the same frequency and suppresses the weaker one. When the interfering signal and the desired FM signal are of nearly equal strength, the receiver fluctuates back and forth between them. This phenomenon is known as the capture effect.

9. A source generates three messages with probabilities 0.5, 0.25, 0.25. Calculate the source entropy.
H = Σ Pk log2(1/Pk)
= 0.5 log2(1/0.5) + 0.25 log2(1/0.25) + 0.25 log2(1/0.25)
= 0.5(1) + 0.25(2) + 0.25(2)
= 1.5 bits/symbol
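
The same calculation in Python (illustrative only):

from math import log2

p = [0.5, 0.25, 0.25]
H = sum(pk * log2(1 / pk) for pk in p)
print(H)   # 1.5 bits/symbol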

10.State the advantages of LZ coding.


OUT OF SYLLABUS
PART B

11.(a) (i) Draw an envelope detector circuit used for demodulation of AM and explain its
operation.

A peak detector is a series connection of a diode and a capacitor, outputting a DC voltage equal to the peak value of the applied AC signal. An envelope detector is an electronic circuit that takes a high-frequency signal as input and provides an output which is the envelope of the original signal. The capacitor in the circuit stores up
charge on the rising edge, and releases it slowly through the resistor when the signal falls. The diode in
series rectifies the incoming signal, allowing current flow only when the positive input terminal is at a
higher potential than the negative input terminal.
Most practical envelope detectors use either half-wave or full-wave rectification of the signal to convert
the AC audio input into a pulsed DC signal. Filtering is then used to smooth the final result. This filtering
is rarely perfect and some "ripple" is likely to remain on the envelope follower output, particularly for low
frequency inputs such as notes from a bass guitar. More filtering gives a smoother result, but decreases
the responsiveness; thus, real-world designs must be optimized for the application.

An AC voltage source applied to the peak detector charges the capacitor to the peak of the input. The diode conducts on positive half-cycles, charging the capacitor to the waveform peak. When the input waveform falls below the DC "peak" stored on the capacitor, the diode is reverse biased, blocking current flow from the capacitor back to the source. Thus, the capacitor retains the peak value even as the waveform
drops to zero. Another view of the peak detector is that it is the same as a half-wave rectifier with a filter
capacitor added to the output.
It takes a few cycles for the capacitor to charge to the peak, due to the series resistance (the RC "time constant"). With, for example, a 5 V peak input, the capacitor does not charge all the way to 5 V: it would do so only if an "ideal diode" were available, but a silicon diode has a forward voltage drop of about 0.7 V, which subtracts from the 5 V peak of the input.
For good detection of the modulated signal with the diode detector, one requirement is that the time constant of the RC filter satisfies:

1/ωc ≤ RC ≤ √(1 − µ²)/(ωm·µ)

where:

• Firstly, the formula states that RC has to be equal to or greater than 1/ωc.
• If the RC time constant were too short, there would be significant ripple at the carrier frequency on the output, which is not what is wanted from a diode detector (or from an AC rectifier in a power supply). Since the RC network is never a perfect brick-wall filter, some carrier ripple has to be accepted; in practice an RC time constant of about five times 1/ωc or more is preferred.
• At the other end of the scale, RC cannot be too large or it will start to significantly attenuate the high frequencies in the detected analogue waveform, which is what the upper limit involving 1/ωm represents. A short numerical sketch of this detector follows below.
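
The following is a minimal numerical sketch (not part of the key) of the diode/RC envelope detector described above; the carrier, message and modulation-index values are assumed for illustration:

import numpy as np

fs = 2_000_000                     # sampling rate, Hz
t = np.arange(0, 0.005, 1 / fs)
fc, fm, mu = 200_000, 1_000, 0.5   # carrier, message, modulation index (assumed)
am = (1 + mu * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# RC bounds from the text: 1/wc <= RC <= sqrt(1 - mu^2)/(wm * mu)
wc, wm = 2 * np.pi * fc, 2 * np.pi * fm
rc_min, rc_max = 1 / wc, np.sqrt(1 - mu**2) / (wm * mu)
RC = rc_max / 3                    # a value inside the allowed range

# Ideal-diode half-wave rectification followed by exponential RC discharge
rectified = np.maximum(am, 0.0)
env = np.zeros_like(rectified)
decay = np.exp(-1 / (fs * RC))     # capacitor discharge factor per sample
for i in range(1, len(rectified)):
    env[i] = max(rectified[i], env[i - 1] * decay)

print(f"RC bounds: {rc_min:.2e} s to {rc_max:.2e} s, chosen RC = {RC:.2e} s")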

(ii) How can SSB be generated using Weaver's method? Illustrate with a neat block diagram.

In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a refinement of amplitude modulation which uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal that has twice the bandwidth of the original baseband signal. Single-sideband modulation avoids this bandwidth doubling, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver.

In Figure 1, the "90°" label should more correctly be "−90°", because in the time domain a Hilbert transformer shifts a sinusoid by −90°. Assuming the rightmost 90° rectangle is a 90° phase-delay element, its output would not be sin(ωct) but −sin(ωct). Such ambiguous "90°" notation often occurs in the literature of SSB systems.

The purpose of a remote SSB receiver is to demodulate the transmitted SSB signal, recovering the baseband audio signal. The demodulated analog baseband signal can then be amplified to drive a loudspeaker.
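
Since the key does not spell out the Weaver block diagram, the following is a hedged baseband simulation sketch of Weaver's method; the audio band, sub-carrier f0, carrier fc and filter length are illustrative assumptions:

import numpy as np
from scipy.signal import firwin, lfilter

fs = 48_000                        # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
m = np.cos(2 * np.pi * 700 * t) + 0.5 * np.cos(2 * np.pi * 2200 * t)  # audio in 300 Hz..3 kHz

f0 = 1650.0                        # sub-carrier at the centre of the audio band
fc = 10_000.0                      # final (low, for simulation) carrier frequency
lp = firwin(401, f0 / (fs / 2))    # low-pass filters, cutoff = half the audio bandwidth

# First pair of mixers followed by identical low-pass filters
i_bb = lfilter(lp, [1.0], m * np.cos(2 * np.pi * f0 * t))
q_bb = lfilter(lp, [1.0], m * np.sin(2 * np.pi * f0 * t))

# Second pair of mixers at fc + f0, then combine; the '+' here selects the
# upper sideband about the suppressed carrier fc, '-' gives the inverted sideband.
ssb = (i_bb * np.cos(2 * np.pi * (fc + f0) * t)
       + q_bb * np.sin(2 * np.pi * (fc + f0) * t))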
(OR)
(b)(i) What is frequency division multiplexing? Explain.
OUT OF SYLLABUS
(ii) Compare various AM systems.
Refer nov/dec 2012 Q.NO.11(b)
12.(a) A WBFM modulator is used to transmit audio signals containing frequencies in the range 100 Hz to 15 kHz. The desired FM signal at the transmitter output is to have a carrier frequency of 100 MHz and a minimum frequency deviation of 75 kHz. Assume the modulation index β = 0.2 radians for NBFM. Find the frequency multiplier values N1, N2 and the values of carrier frequency and frequency deviation at the various points in the WBFM modulator.
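
No worked solution appears in the key at this point. The following is a hedged sketch (not the official answer) of the usual Armstrong indirect-FM arithmetic for these numbers; the NBFM crystal frequency of 0.1 MHz and the N1 = 75, N2 = 50 split are assumed, since the question does not fix them:

beta_nbfm = 0.2            # NBFM modulation index
fm_min = 100.0             # lowest modulating frequency, Hz
delta_f_out = 75e3         # required deviation at the output, Hz
fc_out = 100e6             # required output carrier frequency, Hz

delta_f1 = beta_nbfm * fm_min        # NBFM deviation: 0.2 * 100 = 20 Hz
n_total = delta_f_out / delta_f1     # total multiplication N1*N2 = 3750

# Assumed split and NBFM crystal frequency (a common textbook choice):
N1, N2, f1 = 75, 50, 0.1e6
fc_after_N1 = N1 * f1                # 7.5 MHz carrier after the first multiplier
delta_f_after_N1 = N1 * delta_f1     # 1.5 kHz deviation (mixing leaves deviation unchanged)
fc_after_mixer = fc_out / N2         # carrier must be shifted to 2.0 MHz before N2
print(n_total, fc_after_N1, delta_f_after_N1, fc_after_mixer,
      N2 * fc_after_mixer, N2 * delta_f_after_N1)   # final: 100 MHz carrier, 75 kHz deviation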
(OR)

(b) Draw the circuit diagram of a Foster seeley discriminator and explain its working with
relevant phasor diagrams.
Refer Important questions key Q.NO.3
13.(a)(i) List the different types of random process and give the definitions.

Consider an experiment in which we flip 10 coins, and we want to know the number of coins that come up heads. Here, the elements of the sample space Ω are length-10 sequences of heads and tails. For example, we might have ω0 = (H, H, T, H, T, H, H, T, T, T) ∈ Ω. However, in practice, we usually do not care about the probability of obtaining any particular sequence of heads and tails. Instead we usually care about real-valued functions of outcomes, such as the number of heads that appear among our 10 tosses, or the length of the longest run of tails. These functions, under some technical conditions, are known as random variables.
More formally, a random variable X is a function X : Ω → R. Typically, we will denote random variables using upper case letters X(ω), or more simply X (where the dependence on the random outcome ω is implied). We will denote the value that a random variable may take on using lower case letters x.
Example: In our experiment above, suppose that X(ω) is the number of heads which occur in the sequence of tosses ω. Given that only 10 coins are tossed, X(ω) can take only a finite number of values, so it is known as a discrete random variable. Here, the probability of the set associated with a random variable X taking on some specific value k is
P(X = k) := P({ω : X(ω) = k}).
Example: Suppose that X(ω) is a random variable indicating the amount of time it takes for a radioactive particle to decay. In this case, X(ω) takes on an infinite number of possible values, so it is called a continuous random variable. We denote the probability that X takes on a value between two real constants a and b (where a < b) as
P(a ≤ X ≤ b) := P({ω : a ≤ X(ω) ≤ b}).

Cumulative distribution functions


In order to specify the probability measures used when dealing with random variables, it is often
convenient to specify alternative functions (CDFs, PDFs, and PMFs) from which the probability
measure governing an experiment immediately follows. In this section and the next two sections,
we describe each of these types of functions in turn.
A cumulative distribution function (CDF) is a function FX : R → [0, 1] which specifies a probability measure as
FX(x) ≜ P(X ≤ x).

By using this function one can calculate the probability of any event in F. Figure 1 shows a sample CDF function.
Probability mass functions
When a random variable X takes on a finite set of possible values (i.e., X is a discrete random
variable), a simpler way to represent the probability measure associated with a random variable
is to directly specify the probability of each value that the random variable can assume. In
particular, a probability mass function (PMF) is a function pX : Ω → R such that
pX(x) ≜ P(X = x).
In the case of a discrete random variable, we use the notation Val(X) for the set of possible values that the random variable X may assume. For example, if X(ω) is a random variable indicating the number of heads out of ten tosses of a coin, then Val(X) = {0, 1, 2, . . . , 10}.
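
An illustrative Python sketch (not part of the original notes) that enumerates the sample space of the 10-coin experiment and builds the PMF of X:

from itertools import product

omega = list(product("HT", repeat=10))     # the 2**10 outcomes of the experiment
p_outcome = 1 / len(omega)                 # each outcome is equally likely

pmf = {}
for w in omega:
    x = w.count("H")                       # X(w) = number of heads in the outcome w
    pmf[x] = pmf.get(x, 0.0) + p_outcome

print(sorted(pmf.items()))                 # Val(X) = {0, 1, ..., 10} with P(X = k)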
Variance
The variance of a random variable X is a measure of how concentrated the distribution of a
random variable X is around its mean. Formally, the variance of a random variable X is defined
as
Var[X] ≜ E[(X − E[X])²]
Using the properties in the previous section, we can derive an alternate expression for the variance:
E[(X − E[X])²] = E[X² − 2E[X]X + E[X]²]
= E[X²] − 2E[X]E[X] + E[X]²
= E[X²] − E[X]²,
where the second equality follows from linearity of expectations and the fact that E[X] is actually a constant with respect to the outer expectation.
Properties
1. Var[a] = 0 for any constant a ∈ R.
2. Var[af(X)] = a²Var[f(X)] for any constant a ∈ R.
Some common random variables
Discrete random variables
• X ∼ Bernoulli(p) (where 0 ≤ p ≤ 1): one if a coin with heads probability p comes up heads, zero otherwise.
p(x) = p if x = 1; p(x) = 1 − p if x = 0
• X ∼ Binomial(n, p) (where 0 ≤ p ≤ 1): the number of heads in n independent flips of a coin with heads probability p.
p(x) = C(n, x) p^x (1 − p)^(n−x)
• X ∼ Geometric(p) (where p > 0): the number of flips of a coin with heads probability p until the first heads.
p(x) = p(1 − p)^(x−1)
• X ∼ Poisson(λ) (where λ > 0): a probability distribution over the nonnegative integers used for modeling the frequency of rare events.
p(x) = e^(−λ) λ^x / x!
Continuous random variables
• X ∼ Uniform(a, b) (where a < b): equal probability density to every value between a and b on the real line.
f(x) = 1/(b − a) if a ≤ x ≤ b, 0 otherwise
• X ∼ Exponential(λ) (where λ > 0): decaying probability density over the nonnegative reals.
f(x) = λe^(−λx) if x ≥ 0, 0 otherwise
• X ∼ Normal(μ, σ²): also known as the Gaussian distribution,
f(x) = (1/(√(2π)σ)) e^(−(x−μ)²/(2σ²))
Figure: PDF and CDF of a couple of random variables
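
An illustrative sketch (not part of the original notes) drawing samples from the distributions listed above with NumPy; all parameter values are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000

bernoulli = rng.binomial(1, 0.3, n_samples)        # Bernoulli(p=0.3)
binomial  = rng.binomial(10, 0.3, n_samples)       # Binomial(n=10, p=0.3)
geometric = rng.geometric(0.3, n_samples)          # Geometric(p=0.3), support 1, 2, ...
poisson   = rng.poisson(4.0, n_samples)            # Poisson(lambda=4)
uniform   = rng.uniform(0.0, 1.0, n_samples)       # Uniform(a=0, b=1)
expon     = rng.exponential(1 / 2.0, n_samples)    # Exponential(lambda=2), scale = 1/lambda
normal    = rng.normal(0.0, 1.0, n_samples)        # Normal(mu=0, sigma^2=1)

# Sample means should be close to the theoretical means (p, n*p, 1/p, lambda, ...)
print(bernoulli.mean(), binomial.mean(), geometric.mean(), poisson.mean())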

Two random variables


Thus far, we have considered single random variables. In many situations, however, there may
be more than one quantity that we are interested in knowing during a random experiment. For
instance, in an experiment where we flip a coin ten times, we may care about both X (ω) = the
number of heads that come up as well as Y (ω) = the length of the longest run of consecutive
heads. In this section, we consider the setting of two random variables.
Joint and marginal distributions
Suppose that we have two random variables X and Y . One way to work with these two random
variables is to consider each of them separately. If we do that we will only need FX (x) and FY
(y). But if we want to know about the values that X and Y assume simultaneously during
outcomes of a random experiment, we require a more complicated structure known as the joint
cumulative distribution function of X and Y, defined by
FXY(x, y) = P(X ≤ x, Y ≤ y).

It can be shown that by knowing the joint cumulative distribution function, the probability of any event involving X and Y can be calculated. The joint CDF FXY(x, y) and the distribution functions FX(x) and FY(y) of each variable separately are related by
FX(x) = lim_{y→∞} FXY(x, y)
FY(y) = lim_{x→∞} FXY(x, y).
Here, we call FX(x) and FY(y) the marginal cumulative distribution functions of FXY(x, y).
Properties
lim_{x,y→∞} FXY(x, y) = 1.
lim_{x,y→−∞} FXY(x, y) = 0.
FX(x) = lim_{y→∞} FXY(x, y).
Joint and marginal probability mass functions
If X and Y are discrete random variables, then the joint probability mass function
pXY : R × R → [0, 1] is defined by
pXY(x, y) = P(X = x, Y = y).
Here, 0 ≤ pXY(x, y) ≤ 1 for all x, y, and Σ_{x∈Val(X)} Σ_{y∈Val(Y)} pXY(x, y) = 1.
How does the joint PMF over two variables relate to the probability mass function for each variable separately? It turns out that
pX(x) = Σ_y pXY(x, y),
and similarly for pY(y). In this case, we refer to pX(x) as the marginal probability mass function of X. In statistics, the process of forming the marginal distribution with respect to one variable by summing out the other variable is often known as marginalization.
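
A small illustrative sketch of marginalization for a toy joint PMF (numbers are arbitrary):

import numpy as np

# Rows index values of X, columns index values of Y; entries sum to 1.
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.05, 0.25, 0.30]])

p_x = p_xy.sum(axis=1)      # marginal PMF of X: sum over y
p_y = p_xy.sum(axis=0)      # marginal PMF of Y: sum over x
print(p_x, p_y, p_xy.sum()) # [0.4 0.6], [0.15 0.45 0.4], 1.0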
Joint and marginal probability density functions
Let X and Y be two continuous random variables with joint distribution function FX Y . In the
case that FX Y (x, y) is everywhere differentiable in both x and y, then we can define the joint
probability density function,
fXY(x, y) = ∂²FXY(x, y) / (∂x ∂y).
As in the single-dimensional case, it is not the case that fXY(x, y) = P(X = x, Y = y); rather,
∫∫_{(x,y)∈A} fXY(x, y) dx dy = P((X, Y) ∈ A).
Note that the values of the probability density function fXY(x, y) are always nonnegative, but they may be greater than 1. Nonetheless, it must be the case that
∫_{−∞}^{∞} ∫_{−∞}^{∞} fXY(x, y) dx dy = 1.
Analogous to the discrete case, we define
fX(x) = ∫_{−∞}^{∞} fXY(x, y) dy
as the marginal probability density function (or marginal density) of X, and similarly for fY(y).
Conditional distributions
Conditional distributions seek to answer the question, what is the probability distribution over Y
, when we know that X must take on a certain value x? In the discrete case, the conditional
probability mass function of Y given X is simply
pY|X(y|x) = pXY(x, y) / pX(x), assuming that pX(x) ≠ 0.
Expectation and covariance
Suppose that we have two discrete random variables X, Y and g : R² → R is a function of these two random variables. Then the expected value of g is defined in the following way,
E[g(X, Y)] ≜ Σ_{x∈Val(X)} Σ_{y∈Val(Y)} g(x, y) pXY(x, y).
For continuous random variables X, Y, the analogous expression is
E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) fXY(x, y) dx dy.
We can use the concept of expectation to study the relationship of two random variables with
each other. In particular, the covariance of two random variables X and Y is defined as
Cov[X, Y] ≜ E[(X − E[X])(Y − E[Y])]
Using an argument similar to that for variance, we can rewrite this as
Cov[X, Y] = E[(X − E[X])(Y − E[Y])]
= E[XY − X E[Y] − Y E[X] + E[X]E[Y]]
= E[XY] − E[X]E[Y] − E[Y]E[X] + E[X]E[Y]
= E[XY] − E[X]E[Y].
Here, the key step in showing the equality of the two forms of covariance is in the third equality, where we use the fact that E[X] and E[Y] are actually constants which can be pulled out of the expectation. When Cov[X, Y] = 0, we say that X and Y are uncorrelated.

Properties
(Linearity of expectation) E[f(X, Y) + g(X, Y)] = E[f(X, Y)] + E[g(X, Y)].
Var[X + Y] = Var[X] + Var[Y] + 2Cov[X, Y].
If X and Y are independent, then Cov[X, Y] = 0.
If X and Y are independent, then E[f(X)g(Y)] = E[f(X)]E[g(Y)].
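
An illustrative numerical check (not part of the original notes) of Cov[X, Y] = E[XY] − E[X]E[Y] on simulated data:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 2.0 * x + rng.normal(size=100_000)     # Y correlated with X

cov_direct = np.mean((x - x.mean()) * (y - y.mean()))   # E[(X - E[X])(Y - E[Y])]
cov_alt = np.mean(x * y) - x.mean() * y.mean()          # E[XY] - E[X]E[Y]
print(cov_direct, cov_alt)                 # both approximately 2.0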
(ii) Write short notes on shot noise.
Refer Important questions key Q.NO.5
(OR)

(b)(i) Write the definition, power spectral density and autocorrelation function for white
noise and narrow band noise.
Refer model set 2 Q.NO.3(a)(i) & Refer Important questions key Q.NO.5

(ii) What causes thermal noise in a material? Write the expression for RMS value of the
noise.
Refer Important questions key Q.NO.5
14.(a)(i) Sketch the block diagram of DSB-SC AM system and derive the figure of merit.
Refer april/may 2015 Q.NO 14(a)

(ii) Using super heterodyne principle, draw the block diagram of AM radio receiver and
briefly explain it.
Refer may/june 2013 Q.NO.14(a)(i)
(OR)

(b) Discuss the effects of noise on the carrier in a FM receiver with suitable mathematical
diagrams.

Refer nov/dec 2012 Q.NO.12(b)


15.(a) A DMS has the following alphabet with probabilities of occurrence as shown below.
Symbol s0 s1 s2 s3 s4 s5 s6
Probability 0.125 0.0625 0.25 0.0625 0.125 0.125 0.25

Generate the Huffman code with minimum code variance. Determine the code variance
and code efficiency.
Refer april/may 2015 Q.NO. 15(a)(i)
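
The referenced solution is not reproduced here; the following is a hedged Python sketch of ordinary Huffman coding for the given probabilities (it does not enforce the minimum-variance tie-breaking rule, which the referenced answer would apply by placing merged nodes as high as possible):

import heapq
from math import log2

probs = {"s0": 0.125, "s1": 0.0625, "s2": 0.25, "s3": 0.0625,
         "s4": 0.125, "s5": 0.125, "s6": 0.25}

# Each heap entry: (probability, tie-breaker, {symbol: code-so-far}).
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in c1.items()}
    merged.update({s: "1" + c for s, c in c2.items()})
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1

codes = heap[0][2]
L = sum(probs[s] * len(codes[s]) for s in probs)            # average code length
H = sum(-p * log2(p) for p in probs.values())               # source entropy, 2.625 bits
var = sum(probs[s] * (len(codes[s]) - L) ** 2 for s in probs)
print(codes, L, H, H / L, var)                              # efficiency = H / L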
(OR)

(b) Derive Shannon Hartley theorem for the channel capacity of a continuous channel
having an average power limitation and perturbed by an additive band limited white
Gaussian noise. Explain the band width signal to noise ratio tradeoff for this theorem.
The channel capacity theorem is the ultimate application of information-theoretic concepts to transmission systems and their design and understanding.
Let us consider a case for which
• Source rate R = r·H(X) [bits per second]
• Channel capacity C = s·Cs [bits per second]
• R < C

If the source signal is a bit-stream and if the source bit-rate is lower than the channel capacity,
then channel coding can be designed to provide arbitrarily low bit error rate (BER).
In practice, very low error rates may not be possible in case of noisy channels because the
processing delay and implementation complexity might become very high. Notice also that the
above channel capacity theorem does not specify in any way how this could be done in practice.
Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error-correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is

C = B log2(1 + S/N)

where
C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated signal);
S is the average received signal power over the bandwidth (in case of a modulated signal, often
denoted C, i.e. modulated carrier), measured in watts (or volts squared);
N is the average noise or interference power over the bandwidth, measured in watts (or volts
squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication
signal to the Gaussian noise interference expressed as a linear power ratio (not as logarithmic
decibels).
If the Shannon capacity is equated with Hartley's law for M signalling levels, the equivalent number of levels is M = √(1 + S/N). The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of RMS signal amplitude to noise standard deviation. This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that M pulse levels can literally be sent without any confusion; more levels are needed to allow for redundant coding and error correction, but the net data rate that can be approached with coding is equivalent to using that M in Hartley's law.

For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:

• If S/N >> 1, then
C ≈ B log2(S/N) ≈ 0.332 · B · SNR(dB)
where SNR(dB) = 10 log10(S/N).

• Similarly, if S/N << 1, then
C ≈ 1.44 · B · (S/N).

In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density N0 watts per hertz, in which case the total noise power is N = B·N0 and C ≈ 1.44 · S/N0.
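
An illustrative Python sketch of the capacity formula and the two approximations above (all numbers are assumed):

from math import log2

def capacity(bandwidth_hz, snr_linear):
    """C = B * log2(1 + S/N) in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

B = 3_000.0            # e.g. a 3 kHz telephone-type channel
snr = 1000.0           # 30 dB, so S/N >> 1

c_exact = capacity(B, snr)
c_high_snr = B * log2(snr)          # high-SNR approximation
print(c_exact, c_high_snr)          # both close to 29.9 kbit/s

# Low-SNR case: capacity becomes (nearly) independent of bandwidth.
N0 = 1e-9              # noise spectral density, W/Hz (assumed)
S = 1e-9               # received signal power, W (assumed)
for B in (1e6, 10e6, 100e6):
    print(B, capacity(B, S / (N0 * B)))   # all close to 1.44 * S / N0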

S-ar putea să vă placă și