
EE 179

Fourier Transform and Applications


Sample Final Examination Solutions

June 5, 2014
Handout #46

1. Choice of modulating signal.


a. Throughout the course, the modulating carrier signal for both AM and FM was chosen to
be cos 2πfc t. Why is cosine preferable to sine?
b. Suppose that we modulate with a nonsinusoidal periodic signal, such as a square wave or a
sawtooth function. What changes should be made to the modulating system?
Solution
a. Since cosine is real and even, its Fourier transform is also real and even. Moreover, both
impulses in its Fourier transform have positive area. So the spectra of cosine modulated
signals are easy to sketch.
b. In general, periodic signals have frequency content at all multiples of the fundamental frequency, including a possible DC component. Therefore the product of the message signal
and the modulating signal should be bandpass filtered. This filtering is not needed by the
receiver but keeps the transmitter output out of unauthorized frequency bands.
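
As a quick illustration of part (b), here is a short Python sketch (not part of the original handout; the sample rate and the carrier and message frequencies are arbitrary example values). Multiplying a message tone by a square-wave carrier places energy near fc and near its odd harmonics 3fc, 5fc, . . . , which is why a bandpass filter around fc is needed at the transmitter.

    # Sketch: spectrum of a message multiplied by a square-wave carrier (example values).
    import numpy as np
    from scipy.signal import square

    fs = 10_000          # sample rate in Hz, chosen for the sketch
    fc = 100             # square-wave carrier fundamental (Hz)
    fm = 5               # message tone (Hz)
    t = np.arange(0, 1.0, 1 / fs)

    m = np.cos(2 * np.pi * fm * t)          # message
    carrier = square(2 * np.pi * fc * t)    # harmonics at fc, 3*fc, 5*fc, ...
    x = m * carrier                         # transmitter product before filtering

    X = np.abs(np.fft.rfft(x)) / len(x)     # one-sided spectrum magnitude
    f = np.fft.rfftfreq(len(x), 1 / fs)
    for band in (fc, 3 * fc, 5 * fc):
        idx = (f >= band - 2 * fm) & (f <= band + 2 * fm)
        print(f"largest spectral line near {band} Hz: {X[idx].max():.3f}")
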
2. Overmodulation. A signal m(t) = sin 2πt is transmitted using AM modulation:
φAM(t) = (1 + km(t)) cos 20πt .
a. Does the bandwidth of φAM(t) depend on the modulation index k?
b. Sketch the envelope of φAM(t) for k = 1.2.
c. Can the signal m(t) be recovered when k > 1? Explain briefly.
Solution
a. No. The spectrum of φAM(t) consists of impulses at ±10 Hz (carrier) and at ±9 Hz and
±11 Hz (sidebands). The modulation index affects the amplitude of the sidebands but not
the signal bandwidth.
b. The modulated signal and its envelope are shown below. Note the phase changes in the
modulated signal at (π + sin⁻¹(1/1.2))/(2π) ≈ 0.66 and (2π − sin⁻¹(1/1.2))/(2π) ≈ 0.84.
[Figure: φAM(t) = (1 + 1.2 sin 2πt) cos 20πt and its envelope for 0 ≤ t ≤ 2; vertical axis from −2.5 to 2.5.]
c. Although an overmodulated signal cannot be recovered using envelope detection, a standard
coherent demodulator works. The local carrier can be kept in phase using the transmitted
carrier, even though the received signal undergoes phase reversals at the overmodulation points.
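
A small numerical sketch of parts (b) and (c) (my own check, with arbitrary sample rate and filter choices, not part of the handout): a rectify-and-lowpass envelope detector returns roughly |1 + k m(t)|, which differs from 1 + k m(t) wherever overmodulation drives the envelope negative, while a coherent demodulator recovers m(t).

    # Sketch: overmodulated AM (k = 1.2); envelope detection fails, coherent detection works.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 400                                   # sample rate (Hz), example value
    t = np.arange(0, 2, 1 / fs)
    k = 1.2
    m = np.sin(2 * np.pi * t)                  # message at 1 Hz
    s = (1 + k * m) * np.cos(20 * np.pi * t)   # AM signal, 10 Hz carrier

    b, a = butter(4, 4, btype="low", fs=fs)    # lowpass: passes the message, rejects 2*fc terms

    # Envelope detector model: rectify, lowpass, rescale (mean of |cos| is 2/pi).
    env = filtfilt(b, a, np.abs(s)) * np.pi / 2        # roughly |1 + k m(t)|

    # Coherent detector: multiply by local carrier, lowpass, remove DC, undo the 1/2 factor.
    msg = (2 * filtfilt(b, a, s * np.cos(20 * np.pi * t)) - 1) / k

    print("envelope detector error:", np.max(np.abs(env - (1 + k * m))))  # large where 1 + k m < 0
    print("coherent detector error:", np.max(np.abs(msg - m)))            # small
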
3. Quadrature demodulation. The main motivation for the in-phase and quadrature representation
of modulated signals is to utilize the fact that sine and cosine are orthogonal signals, and thus
can be used to carry independent information over a channel. In particular, a modulated signal
s(t) = sI(t) cos(2πfc t) − sQ(t) sin(2πfc t), modulated with different information signals sI(t) and
sQ(t), can be demodulated such that both information signals can be detected in the receiver.
This is especially useful for digital signals. Consider the quadrature demodulator system shown
in the following figure. Assume n(t) = 0.
[Figure: quadrature demodulator. The received signal s(t) + n(t) drives two mixers, one with carrier cos(2πfc t + θ1) and the other with −sin(2πfc t + θ2); each mixer output passes through a low-pass filter, producing the outputs ŝI(t) and ŝQ(t).]

a. The phase offsets in the carriers are nonzero but known. Express the outputs of the demodulators in terms of the original signals sI(t) and sQ(t).
b. Are there values of θ1 and θ2 such that sI(t) and sQ(t) cannot be recovered from the
demodulator outputs?
c. Suppose that θ1 and θ2 are small. Find formulas for sI(t) and sQ(t) in terms of the
demodulator outputs ŝI(t) and ŝQ(t) that are linear in θ1 and θ2.
Solution
a. Using trigonometric identities we can write the demodulator carrier signals as
g1(t) = cos(2πfc t + θ1) = cos θ1 cos(2πfc t) − sin θ1 sin(2πfc t)
g2(t) = −sin(2πfc t + θ2) = −sin θ2 cos(2πfc t) − cos θ2 sin(2πfc t)

Therefore

s(t)g1(t) = cos θ1 cos²(2πfc t) sI(t) − cos θ1 cos(2πfc t) sin(2πfc t) sQ(t)
            − sin θ1 sin(2πfc t) cos(2πfc t) sI(t) + sin θ1 sin²(2πfc t) sQ(t)
          = −(1/2)(cos θ1 sQ(t) + sin θ1 sI(t)) sin(4πfc t)
            + (1/2) cos θ1 sI(t)(1 + cos(4πfc t)) + (1/2) sin θ1 sQ(t)(1 − cos(4πfc t))

Similarly,

s(t)g2(t) = −sin θ2 cos²(2πfc t) sI(t) + sin θ2 cos(2πfc t) sin(2πfc t) sQ(t)
            − cos θ2 sin(2πfc t) cos(2πfc t) sI(t) + cos θ2 sin²(2πfc t) sQ(t)
          = (1/2)(sin θ2 sQ(t) − cos θ2 sI(t)) sin(4πfc t)
            − (1/2) sin θ2 sI(t)(1 + cos(4πfc t)) + (1/2) cos θ2 sQ(t)(1 − cos(4πfc t))

A low-pass filter with cutoff frequency above the message bandwidth but below 2fc eliminates
the double-frequency terms sin(4πfc t) and cos(4πfc t). Thus the outputs of the low-pass filters are

ŝI(t) = (1/2) cos θ1 sI(t) + (1/2) sin θ1 sQ(t)
ŝQ(t) = −(1/2) sin θ2 sI(t) + (1/2) cos θ2 sQ(t)
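
These filtered outputs can be verified numerically. The sketch below (assumed carrier, message, and filter parameters, not from the handout) mixes s(t) with the offset carriers, low-pass filters, and compares against the expressions just derived.

    # Sketch: quadrature demodulator with known phase offsets (example values).
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, fc = 20_000, 1000               # sample rate and carrier (Hz), chosen for the sketch
    t = np.arange(0, 0.2, 1 / fs)
    th1, th2 = 0.3, -0.2                # phase offsets theta_1, theta_2 (rad)

    sI = np.cos(2 * np.pi * 40 * t)     # in-phase message
    sQ = np.sin(2 * np.pi * 25 * t)     # quadrature message
    s = sI * np.cos(2 * np.pi * fc * t) - sQ * np.sin(2 * np.pi * fc * t)

    b, a = butter(5, 200, btype="low", fs=fs)     # passes the messages, rejects 2*fc terms
    sI_hat = filtfilt(b, a, s * np.cos(2 * np.pi * fc * t + th1))
    sQ_hat = filtfilt(b, a, s * -np.sin(2 * np.pi * fc * t + th2))

    # Closed-form lowpass outputs from part (a).
    pred_I = 0.5 * np.cos(th1) * sI + 0.5 * np.sin(th1) * sQ
    pred_Q = -0.5 * np.sin(th2) * sI + 0.5 * np.cos(th2) * sQ
    print("max |sI_hat - pred_I| =", np.max(np.abs(sI_hat - pred_I)))   # both should be small
    print("max |sQ_hat - pred_Q| =", np.max(np.abs(sQ_hat - pred_Q)))
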

b. In part (a) we obtained two linear equations in the two unknowns sI(t) and sQ(t), in terms of
the two knowns ŝI(t) and ŝQ(t):

[ŝI(t)]   [  (1/2) cos θ1   (1/2) sin θ1 ] [sI(t)]
[ŝQ(t)] = [ −(1/2) sin θ2   (1/2) cos θ2 ] [sQ(t)]
The determinant of the coefficient matrix is

(1/2) cos θ1 · (1/2) cos θ2 + (1/2) sin θ1 · (1/2) sin θ2 = (1/4) cos(θ1 − θ2) = (1/4) sin(θ1 − θ2 + π/2) .

The determinant is nonzero unless θ2 = θ1 ± π/2. When θ2 = θ1 + π/2 the two demodulator
carriers are the same up to a sign change, so we get only one independent estimate of sI(t) and sQ(t).
c. When θ is small, sin θ ≈ θ and cos θ ≈ 1 − θ²/2. The first-order approximation to the
coefficient matrix of part (b) and its inverse are

A = (1/2) [  1    θ1 ]          A⁻¹ = 2/(1 + θ1θ2) [ 1    −θ1 ] ≈ 2 [ 1    −θ1 ]
          [ −θ2   1  ] ,                            [ θ2     1 ]     [ θ2     1 ] ,
since θ1θ2 ≪ 1. This gives the small phase offset estimates

sI(t) ≈ 2 ŝI(t) − 2θ1 ŝQ(t)
sQ(t) ≈ 2 ŝQ(t) + 2θ2 ŝI(t)
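
A quick numeric check of this first-order inversion (my own sketch; the offsets and signal values are arbitrary): apply the exact mixing matrix from part (b), then undo it with the linearized inverse; the residual error is second order in the offsets.

    # Sketch: first-order correction of small carrier phase offsets (example values).
    import numpy as np

    th1, th2 = 0.05, -0.03                # small phase offsets (rad)
    sI, sQ = 0.7, -1.2                    # values of the two messages at one instant

    # Exact demodulator outputs from part (a).
    A = 0.5 * np.array([[np.cos(th1),  np.sin(th1)],
                        [-np.sin(th2), np.cos(th2)]])
    sI_hat, sQ_hat = A @ np.array([sI, sQ])

    # First-order (linear in theta) recovery from part (c).
    sI_est = 2 * sI_hat - 2 * th1 * sQ_hat
    sQ_est = 2 * sQ_hat + 2 * th2 * sI_hat
    print("errors:", sI_est - sI, sQ_est - sQ)   # both O(theta^2)
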

4. Sine from sinc. By the sampling theorem (page 306 in the textbook), sin πt can be reconstructed
from samples at t = k/2, k = 0, ±1, ±2, . . . (twice the Nyquist rate).
a. Write sin πt in terms of the sample values and the function sin x/x.

b. Use part (a) to find a series for sin(π/4) = √2/2.


Solution
a. Equation 6.10 on page 306 shows how a band-limited signal can be expressed as a sum of
shifted sinc functions. If Ts = 1/(2B), where the signal bandwidth is less than B, then

g(t) = Σk g(kTs) sinc(2πB(t − kTs)) = Σk g(kTs) sinc(2πBt − kπ) ,

where we define sinc x = (sin x)/x. In this problem Ts = 1/2, B = 1/(2Ts) = 1, and 2B = 2. Therefore

sin πt = Σk sin(kπ/2) sinc(2πt − kπ) = Σ(odd k) (−1)^((k−1)/2) sinc(2πt − kπ) = Σl (−1)^l sinc(2πt − (2l + 1)π) .
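
A brief sketch (mine, not the handout's) of this interpolation formula: rebuild sin πt on a fine grid from its samples at t = k/2, truncating the infinite sum at an arbitrary length.

    # Sketch: reconstruct sin(pi*t) from its samples at t = k/2 (twice the Nyquist rate).
    import numpy as np

    t = np.linspace(-3, 3, 1201)
    ks = np.arange(-2000, 2001)        # truncation of the infinite interpolation sum

    # np.sinc is the normalized sinc, np.sinc(x) = sin(pi*x)/(pi*x), so
    # np.sinc(2*t - k) equals sin(2*pi*t - k*pi)/(2*pi*t - k*pi), the sinc used above.
    recon = sum(np.sin(k * np.pi / 2) * np.sinc(2 * t - k) for k in ks)

    print("max reconstruction error:", np.max(np.abs(recon - np.sin(np.pi * t))))
    # The error shrinks as the truncation length grows (the terms decay like 1/k).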

b. If we let t = 1/4, so that 2πt = π/2, then we obtain an infinite series related to π:

sin(π/4) = Σl (−1)^l sinc(π/2 − (2l + 1)π) = Σl (−1)^l sinc(2lπ + π/2) = Σl (−1)^l / (2lπ + π/2) ,

where the sums run over all integers l.

If we reorder this series by increasing magnitude of the denominator, which means taking l
alternately nonnegative and negative (l = 0, −1, 1, −2, 2, . . .), we obtain
√2/2 = 2/π + 2/(3π) − 2/(5π) − 2/(7π) + 2/(9π) + 2/(11π) − 2/(13π) − 2/(15π) + ⋯ .
This result is interesting but not directly applicable to either digital or analog communication.
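
Still, the series is easy to check numerically (a sketch of mine; the truncation length is arbitrary): summing the two-sided series over a symmetric range of l approaches √2/2.

    # Sketch: partial sums of sum_l (-1)^l / (2*l*pi + pi/2) approach sqrt(2)/2.
    import numpy as np

    ls = np.arange(-200_000, 200_001)              # symmetric truncation of the sum over l
    partial = np.sum((-1.0) ** np.abs(ls) / (2 * ls * np.pi + np.pi / 2))
    print(partial, np.sqrt(2) / 2)                 # both approximately 0.70711
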
5. Ramp random process.
a. Let X(t) be a random process with P{X(t) = t} = 1/2 and P{X(t) = 2 − at} = 1/2 for every t.
Find the mean and autocorrelation functions of X(t).

b. For what value(s) of a is X(t) a WSS random process?


Solution
a. The mean and autocorrelation are
E(X(t)) = (1/2)(t) + (1/2)(2 − at) = 1 + (1/2)(1 − a)t

Treating the values of X(t1) and X(t2) as independent, so that each of the four value combinations has probability 1/4,

RX(t1, t2) = E(X(t1)X(t2))
           = (1/4)t1t2 + (1/4)t1(2 − at2) + (1/4)(2 − at1)t2 + (1/4)(2 − at1)(2 − at2)
           = (1/4)(t1t2 + 2t1 − at1t2 + 2t2 − at1t2 + 4 − 2at2 − 2at1 + a²t1t2)
           = (1/4)(4 + 2(1 − a)t1 + 2(1 − a)t2 + (1 − a)²t1t2)
           = 1 + (1/2)(1 − a)(t1 + t2 + (1/2)(1 − a)t1t2)

b. The mean depends on t unless a = 1, so X(t) is not WSS if a ≠ 1. On the other hand, if
a = 1 then E(X(t)) = 1 and RX(t1, t2) = RX(t1 − t2) = 1. Thus X(t) is WSS if and only if
a = 1.
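
To see this concretely, the short sketch below (mine; the times and the values of a are arbitrary) evaluates the formulas from part (a): only a = 1 gives a constant mean and an autocorrelation that does not change as both times are shifted.

    # Sketch: evaluate the mean and autocorrelation formulas from part (a) for a few values of a.
    import numpy as np

    def mean_X(t, a):
        return 1 + 0.5 * (1 - a) * t

    def R_X(t1, t2, a):
        return 1 + 0.5 * (1 - a) * (t1 + t2 + 0.5 * (1 - a) * t1 * t2)

    ts = np.array([0.0, 1.0, 2.0, 5.0])
    for a in (0.5, 1.0, 2.0):
        print(f"a = {a}: E[X(t)]    = {mean_X(ts, a)}")
        print(f"         R_X(t,t+1) = {R_X(ts, ts + 1, a)}")
    # Only a = 1 gives a constant mean and a constant R_X(t, t+1).
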
6. Error control coding for AWGN channel. Binary data is sent over a communication link with
additive white Gaussian noise. The measured bit error rate is 1%.
a. Find the signal-to-noise ratio corresponding to bit error probability 0.01.
b. To reduce the error probability, each bit is transmitted three times, and the receiver uses
majority vote to estimate the bit. Find the error probability for this coding method.
c. A more advanced receiver adds the analog values of the three received signals and decides
based on Y < 0 or Y > 0:
Y = Y1 + Y2 + Y3 = (X1 + Z1 ) + (X2 + Z2 ) + (X3 + Z3 ) ,
where X1 , X2 , X3 are the three copies of the transmitted bit and Z1 , Z2, Z3 are three independent noise random variables. Find the signal-to-noise ratio and the corresponding bit
error probability.
Solution
a. The error probability for a binary communication channel with Gaussian noise is
Pe = Q(√SNR). From the table of the Q(·) function we see that Q(2.3) = 1.0724 × 10⁻², so
SNR ≈ (2.3)² ≈ 5.3.
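
The table lookup can be reproduced with scipy (a sketch, not part of the solution): solve Q(√SNR) = 0.01 using the inverse survival function of the standard normal.

    # Sketch: solve Q(sqrt(SNR)) = 0.01 for the SNR; Q(x) = norm.sf(x).
    import math
    from scipy.stats import norm

    pe = 0.01
    x = norm.isf(pe)                  # Q^{-1}(0.01), about 2.33
    snr = x ** 2                      # about 5.4, close to the table-based 5.3 above
    print(f"Q^-1({pe}) = {x:.3f}, SNR = {snr:.2f} ({10 * math.log10(snr):.2f} dB)")
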
b. An error occurs when two or three of the three bit copies are incorrect. Using the binomial
probability distribution,

Pe = (3 choose 2)(0.01)²(0.99) + (3 choose 3)(0.01)³ = 3 × 0.99 × 10⁻⁴ + 1 × 10⁻⁶ ≈ 3.0 × 10⁻⁴ .
Since Pe ≈ Q(3.4), the effective SNR of this coding method is (3.4)² ≈ 11.6, which is only
about twice that of sending a single bit.
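
The same numbers fall out of a few lines of Python (my check, not the handout's): compute the majority-vote error probability exactly, then find the SNR at which a single transmission would have that error probability.

    # Sketch: 3x repetition with majority vote at raw bit error probability p = 0.01.
    import math
    from scipy.stats import norm

    p = 0.01
    pe_majority = 3 * p**2 * (1 - p) + p**3     # two or three of the three bits wrong
    snr_eff = norm.isf(pe_majority) ** 2        # SNR giving the same Pe for a single bit
    print(f"Pe = {pe_majority:.3e}, effective SNR = {snr_eff:.1f} "
          f"({10 * math.log10(snr_eff):.1f} dB)")   # ~3.0e-4 and ~11.8 (table gives ~11.6)
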
c. Since X1 = X2 = X3, we can write Y as Y = 3X1 + Z, where Z = Z1 + Z2 + Z3. The signal
power is E((3X1)²) = 9E(X1²). Since the noise r.v.s are independent, the noise power is

E(Z²) = E((Z1 + Z2 + Z3)²) = E(Z1²) + E(Z2²) + E(Z3²) = 3E(Z1²) .

Thus the signal-to-noise ratio of Y = 3X1 + Z is 9E(X1²)/(3E(Z1²)) = 3E(X1²)/E(Z1²), three times
the single-transmission SNR. The error probability corresponding to SNR = 3 × 5.3 ≈ 15.9 is
Q(√15.9) ≈ Q(4.0) ≈ 3.2 × 10⁻⁵.
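
A matching check for part (c) (again my own sketch): triple the single-transmission SNR and evaluate the Gaussian tail at its square root.

    # Sketch: soft combining of three repeats triples the SNR; evaluate the resulting Pe.
    import math
    from scipy.stats import norm

    snr_single = norm.isf(0.01) ** 2       # SNR of one transmission at bit error rate 0.01
    snr_combined = 3 * snr_single          # Y = 3*X1 + Z has three times the SNR
    pe_combined = norm.sf(math.sqrt(snr_combined))
    print(f"combined SNR = {snr_combined:.1f}, Pe = {pe_combined:.2e}")   # roughly 3e-5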
