June 5, 2014
Handout #46
[Figure: QAM demodulator block diagram. The received signal s(t) is multiplied by the two carriers cos(2πfc t + θ₁) and −sin(2πfc t + θ₂); each product passes through a low-pass filter, producing the outputs ŝI(t) and ŝQ(t).]
a. The phase offsets θ₁ and θ₂ in the demodulator carriers are nonzero but known. Express the outputs ŝI(t) and ŝQ(t) of the demodulators in terms of the original signals sI(t) and sQ(t).
b. Are there values of θ₁ and θ₂ such that sI(t) and sQ(t) cannot be recovered from the
demodulator outputs?
c. Suppose that θ₁ and θ₂ are small. Find formulas for sI(t) and sQ(t) in terms of ŝI(t) and
ŝQ(t) that are linear in θ₁ and θ₂.
Solution
a. Using trigonometric identities we can write the demodulator carrier signals as

g₁(t) = cos(2πfc t + θ₁) = cos θ₁ cos(2πfc t) − sin θ₁ sin(2πfc t)
g₂(t) = −sin(2πfc t + θ₂) = −sin θ₂ cos(2πfc t) − cos θ₂ sin(2πfc t)

Multiplying the QAM signal s(t) = sI(t) cos(2πfc t) − sQ(t) sin(2πfc t) by these carriers produces baseband terms plus terms at twice the carrier frequency. A low-pass filter with cutoff frequency above the bandwidth of sI(t) and sQ(t) but below 2fc eliminates the double-frequency terms sin(4πfc t) and cos(4πfc t). Thus the outputs of the low-pass filters are

ŝI(t) = ½ cos θ₁ sI(t) + ½ sin θ₁ sQ(t)
ŝQ(t) = −½ sin θ₂ sI(t) + ½ cos θ₂ sQ(t)
b. In part (a) we obtained two linear equations in two unknowns, sI(t) and sQ(t), in terms of
two knowns, ŝI(t) and ŝQ(t):

[ ŝI(t) ]   [  ½ cos θ₁   ½ sin θ₁ ] [ sI(t) ]
[ ŝQ(t) ] = [ −½ sin θ₂   ½ cos θ₂ ] [ sQ(t) ]
The determinant of the coefficient matrix is

¼ cos θ₁ cos θ₂ + ¼ sin θ₁ sin θ₂ = ¼ cos(θ₁ − θ₂),

so sI(t) and sQ(t) cannot be recovered when cos(θ₁ − θ₂) = 0, that is, when θ₁ − θ₂ is an odd multiple of π/2.
c. For small θ₁ and θ₂ we can use the approximations cos θ ≈ 1 and sin θ ≈ θ, so

A ≈ [  ½      ½ θ₁ ] ,   A⁻¹ ≈ 1/(1 + θ₁θ₂) [ 2    −2θ₁ ] ≈ [ 2    −2θ₁ ]
    [ −½ θ₂   ½    ]                        [ 2θ₂    2  ]   [ 2θ₂    2  ]

since θ₁θ₂ ≪ 1. This gives the small phase offset estimate:

sI(t) ≈ 2 ŝI(t) − 2θ₁ ŝQ(t)
sQ(t) ≈ 2 ŝQ(t) + 2θ₂ ŝI(t)
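The small-angle recovery can be sanity-checked numerically (a sketch, not part of the handout; the sample values and phase offsets below are arbitrary illustrative numbers):

```python
import math

theta1, theta2 = 0.05, -0.03   # small phase offsets in radians (arbitrary)
sI, sQ = 0.8, -1.3             # values of the original signals at some fixed t (arbitrary)

# Part (a): exact low-pass filter outputs
sI_hat = 0.5*math.cos(theta1)*sI + 0.5*math.sin(theta1)*sQ
sQ_hat = -0.5*math.sin(theta2)*sI + 0.5*math.cos(theta2)*sQ

# Part (c): linear small-angle estimates of the originals
sI_est = 2*sI_hat - 2*theta1*sQ_hat
sQ_est = 2*sQ_hat + 2*theta2*sI_hat

print(sI_est - sI, sQ_est - sQ)  # residual errors are second order in the offsets
```

The residual error is exactly θ₁θ₂ times the original signal, confirming that the estimate is accurate to first order.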
4. Sine from sinc. By the sampling theorem (page 306 in the textbook), sin πt can be reconstructed
from samples at t = k/2, k = 0, ±1, ±2, . . . (twice the Nyquist rate).
a. Write sin πt in terms of the sample values and the function sinc x = sin(πx)/(πx).
Solution
a. The even-indexed samples are zero, and the odd-indexed samples are sin(π(2l + 1)/2) = (−1)^l, so by the sampling theorem

sin πt = Σ_{l=−∞}^{∞} (−1)^l sinc(2t − 2l − 1).

Evaluating at t = 1/4 and using the symmetry sinc(−x) = sinc(x) gives

√2/2 = Σ_l (−1)^l sinc(2l + 1/2) = Σ_l (−1)^l · 1/(π(2l + 1/2)),

since sin(π(2l + 1/2)) = 1 for every integer l.
If we reorder this series by increasing denominator, which means alternately positive and
negative l, we obtain

√2/2 = 2/(π·1) + 2/(π·3) − 2/(π·5) − 2/(π·7) + 2/(π·9) + 2/(π·11) − 2/(π·13) − 2/(π·15) + ···,

or equivalently

π/(2√2) = 1 + 1/3 − 1/5 − 1/7 + 1/9 + 1/11 − 1/13 − 1/15 + ···
This result is interesting but not directly applicable to either digital or analog communication.
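The reordered series can be verified numerically; note the sign pattern + + − − repeating with period four over the odd denominators (a quick check, not part of the handout):

```python
import math

# Partial sum of (2/pi)(1 + 1/3 - 1/5 - 1/7 + 1/9 + ...),
# where the sign pattern + + - - repeats over the odd denominators 1, 3, 5, 7, ...
total = 0.0
for n in range(200000):
    sign = 1.0 if n % 4 in (0, 1) else -1.0
    total += sign / (2*n + 1)
approx = (2/math.pi) * total

print(approx, math.sqrt(2)/2)  # the two values agree to several decimal places
```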
5. Ramp random process.
a. Let X(t) be a random process with P{X(t) = t} = ½ and P{X(t) = 2 − at} = ½ for every t.
Find the mean and autocorrelation functions of X(t).
b. The mean E(X(t)) = ½t + ½(2 − at) = 1 + ½(1 − a)t depends on t unless a = 1, so X(t) is not WSS if a ≠ 1. On the other hand, if
a = 1 then E(X(t)) = 1 and RX (t₁, t₂) = RX (t₁ − t₂) = 1. Thus X(t) is WSS if and only if
a = 1.
6. Error control coding for AWGN channel. Binary data is sent over a communication link with
additive white Gaussian noise. The measured bit error rate is 1%.
a. Find the signal-to-noise ratio corresponding to bit error probability 0.01.
b. To reduce the error probability, each bit is transmitted three times, and the receiver uses
majority vote to estimate the bit. Find the error probability for this coding method.
c. A more advanced receiver adds the analog values of the three received signals and decides
based on Y < 0 or Y > 0:
Y = Y1 + Y2 + Y3 = (X1 + Z1 ) + (X2 + Z2 ) + (X3 + Z3 ) ,
where X1, X2, X3 are the three copies of the transmitted bit and Z1, Z2, Z3 are three independent noise random variables. Find the signal-to-noise ratio and the corresponding bit
error probability.
Solution
a. The error probability for a binary communication channel with Gaussian noise is
Pe = Q(√SNR). From the table of the Q(·) function we see that Q(2.3) = 1.0724 × 10⁻², so
SNR ≈ (2.3)² ≈ 5.3.
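The table lookup can be reproduced with the complementary error function, using the standard identity Q(x) = ½ erfc(x/√2) (a quick check, not part of the handout):

```python
import math

def Q(x):
    """Gaussian tail probability via the identity Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2))

snr = 2.3**2           # SNR such that Q(sqrt(SNR)) = Q(2.3)
print(Q(2.3), snr)     # ~1.07e-2 and ~5.29, matching the table value and the solution
```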
b. An error occurs when two or three of the three bit copies are incorrect. Using the binomial
probability distribution,

Pe = (3 choose 2)(0.01)²(0.99) + (3 choose 3)(0.01)³ = 3 × 0.99 × 10⁻⁴ + 1 × 10⁻⁶ ≈ 3.0 × 10⁻⁴.
Since Pe ≈ Q(3.4), the effective SNR of this coding method is (3.4)² ≈ 11.6, which is only
about twice that of sending a single bit.
c. Since X1 = X2 = X3, we can write Y as Y = 3X1 + Z, where Z = Z1 + Z2 + Z3. The signal
power is E((3X1)²) = 9E(X1²). Since the noise r.v.s are independent, the noise power is

E(Z²) = E((Z1 + Z2 + Z3)²) = E(Z1²) + E(Z2²) + E(Z3²) = 3E(Z1²).

Thus the signal-to-noise ratio of Y = 3X1 + Z is 9E(X1²)/(3E(Z1²)) = 3E(X1²)/E(Z1²), three times that of a single transmission. The
error probability corresponding to SNR = 3 × 5.3 = 15.9 is Q(√15.9) ≈ Q(4.0) = 3.2 × 10⁻⁵.
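Parts (b) and (c) can be checked the same way, with Q(x) defined via erfc as before (a sketch, not part of the handout):

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = erfc(x / sqrt(2)) / 2
    return 0.5 * math.erfc(x / math.sqrt(2))

p = 0.01                               # single-transmission bit error probability
# (b) majority vote fails when 2 or 3 of the 3 copies are in error
pe_majority = 3 * p**2 * (1 - p) + p**3
# (c) soft combining triples the SNR of a single transmission
snr_single = 2.3**2                    # ~5.3, from part (a)
pe_soft = Q(math.sqrt(3 * snr_single))

print(pe_majority, pe_soft)            # ~3.0e-4 vs ~3.4e-5
```

Soft combining before the decision beats bit-by-bit majority voting by roughly an order of magnitude at the same transmit energy.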