
EE/Stat 376B

Network Information Theory

Handout #11
November 4, 2014

Homework Set #5 Solutions


1. Problem 6.6.
(a) The decoding scheme is equivalent to successive decoding for the two DM-MACs
p(y1 |x1 , x2 ) and p(y2 |x1 , x2 ) sharing the same inputs X1 and X2 . Therefore the
rate region is given by the set of all rate pairs (R1 , R2 ) such that
R1 < min{I(X1 ; Y1 |X2 , Q), I(X1 ; Y2 |Q)},
R2 < min{I(X2 ; Y1 |Q), I(X2 ; Y2 |X1 , Q)}
for some pmf p(q)p(x1 |q)p(x2 |q).
(b) Fix a joint distribution p(q)p(x1 |q)p(x2 |q). First, the sum rate constraint is easy
to establish since
I(X1 ; Y1 |X2 , Q) + I(X2 ; Y1 |Q) = I(X1 , X2 ; Y1 |Q),
I(X1 ; Y2 |Q) + I(X2 ; Y2 |X1 , Q) = I(X1 , X2 ; Y2 |Q).
Note that X2 is conditionally independent of X1 given Q. Thus, I(X1 ; Y2 |Q) ≤
I(X1 ; Y2 |X2 , Q) and I(X2 ; Y1 |Q) ≤ I(X2 ; Y1 |X1 , Q). This shows that the rate
region in (a) is always contained in the simultaneous decoding inner bound.
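To make the containment explicit, the constraints in (a) together with the sum-rate identities above imply every constraint of the simultaneous decoding inner bound (a restatement of the argument, not an additional result):

```latex
\begin{align*}
R_1 &< I(X_1; Y_1 \mid X_2, Q), \\
R_2 &< I(X_2; Y_2 \mid X_1, Q), \\
R_1 + R_2 &< \min\bigl\{ I(X_1; Y_1 \mid X_2, Q) + I(X_2; Y_1 \mid Q),\;
  I(X_1; Y_2 \mid Q) + I(X_2; Y_2 \mid X_1, Q) \bigr\} \\
&= \min\bigl\{ I(X_1, X_2; Y_1 \mid Q),\; I(X_1, X_2; Y_2 \mid Q) \bigr\}.
\end{align*}
```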
2. Problem 6.7.
(a) The achievable rate region is given by the set of rate pairs (R1 , R2 ) such that
R1 < min{C(I2 /(1 + S2 )), C(S1 )},
R2 < min{C(I1 /(1 + S1 )), C(S2 )}.
(b) The simultaneous nonunique decoding inner bound is
R1 < C(S1 ),
R2 < C(S2 ),
R1 + R2 < min{C(S1 + I1 ), C(S2 + I2 )}.
When I1 /(1 + S1 ) ≥ S2 and I2 /(1 + S2 ) ≥ S1 , which is equivalent to the very strong
interference conditions with Gaussian input, both regions reduce to R1 < C(S1 )
and R2 < C(S2 ).
(c) Suppose g21 = g12 = 0, so that I1 = I2 = 0. Then each decoder must first decode
the other message through a link of zero capacity, so successive cancellation yields
R1 = R2 = 0. In contrast, using simultaneous nonunique decoding, all rate pairs
(R1 , R2 ) such that R1 + R2 < min{C(S1 ), C(S2 )} can be achieved.
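The reductions in (b) and (c) can be sanity-checked numerically. The following sketch uses hypothetical parameter values (chosen only to satisfy the stated conditions) with the Gaussian capacity function C(x) = (1/2) log2(1 + x):

```python
from math import log2

def C(x):
    # Gaussian capacity function C(x) = (1/2) log2(1 + x)
    return 0.5 * log2(1 + x)

# Very strong interference example (assumed values): S1 = S2 = 1, I1 = I2 = 3,
# so I1/(1 + S1) = 1.5 >= S2 and I2/(1 + S2) = 1.5 >= S1.
S1, S2, I1, I2 = 1.0, 1.0, 3.0, 3.0

# Successive cancellation bounds from part (a) reduce to C(S1), C(S2):
R1_sc = min(C(I2 / (1 + S2)), C(S1))
R2_sc = min(C(I1 / (1 + S1)), C(S2))
assert R1_sc == C(S1) and R2_sc == C(S2)

# The SND sum-rate constraint from part (b) is inactive at the corner (C(S1), C(S2)):
assert C(S1) + C(S2) <= min(C(S1 + I1), C(S2 + I2))

# Part (c): with g21 = g12 = 0 we have I1 = I2 = 0, and the
# successive-cancellation bound collapses to C(0) = 0.
assert min(C(0.0 / (1 + S2)), C(S1)) == 0.0
```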

3. Problem 6.8.
For the first channel, the condition I > S implies the signals operate under the strong
interference regime. Hence the capacity region is the set of rate pairs (R1 , R2 ) such
that
R1 ≤ C(S),
R2 ≤ C(S),
R1 + R2 ≤ C(S + I).
For the second channel, from the simultaneous nonunique decoding inner bound, the
following rate region is achievable:
R1 ≤ C(I),
R2 ≤ C(I),
R1 + R2 ≤ C(S + I).
Since I > S, and thus C(I) > C(S), the second channel has a larger capacity region.
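This comparison can be checked with a hypothetical pair of values satisfying I > S (the numbers below are assumptions for illustration only):

```python
from math import log2

def C(x):
    # Gaussian capacity function C(x) = (1/2) log2(1 + x)
    return 0.5 * log2(1 + x)

S, I = 1.0, 4.0  # any values with I > S

# C is increasing, so the second region's individual-rate
# constraints are strictly looser than the first region's:
assert C(I) > C(S)

# The pair (C(I), 0) satisfies the second region's constraints ...
assert C(I) <= C(I) and C(I) + 0.0 <= C(S + I)
# ... but violates the first region's constraint R1 <= C(S):
assert C(I) > C(S)
```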
4. Problem 6.9.
(a) Suppose that encoder 1 transmits with power β1 P1 for the first fraction α of the
time and with power β̄1 P1 for the remaining fraction 1 − α of the time.
Therefore, for encoder 1, the SNR for fraction α of the time is β1 S1 and the INR
it causes at decoder 2 is β1 I2 . The corresponding terms for the second encoder
are β2 S2 and β2 I1 , respectively.
Let β̄1 S1 , β̄1 I2 , β̄2 S2 , β̄2 I1 be the corresponding terms in the second time slot.
Since the average power constraint P1 must be met, we have
αβ1 P1 + (1 − α)β̄1 P1 = P1 .
Solving for β̄1 gives us β̄1 = (1 − αβ1 )/(1 − α). We obtain a similar expression for β̄2 .
Now we can write the capacity region as the set of (R1 , R2 ) such that
R1 < α C(β1 S/(1 + β2 I)) + (1 − α) C((1 − αβ1 )S/(1 − α + (1 − αβ2 )I)),
R2 < α C(β2 S/(1 + β1 I)) + (1 − α) C((1 − αβ2 )S/(1 − α + (1 − αβ1 )I)).
(b) The SND region reduces to the set of (R1 , R2 ) such that
R1 < α C(β1 S) + (1 − α) C((1 − αβ1 )S/(1 − α)),
R2 < α C(β2 S) + (1 − α) C((1 − αβ2 )S/(1 − α)),
R1 + R2 < α C(β1 S + β2 I) + (1 − α) C(((1 − αβ1 )S + (1 − αβ2 )I)/(1 − α)),
R1 + R2 < α C(β2 S + β1 I) + (1 − α) C(((1 − αβ2 )S + (1 − αβ1 )I)/(1 − α)).
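The power-split algebra above can be verified numerically. The following sketch assumes symmetric SNR/INR values S, I and an arbitrary split (α, β1, β2); all numbers are illustrative assumptions, not part of the problem:

```python
from math import log2

def C(x):
    # Gaussian capacity function C(x) = (1/2) log2(1 + x)
    return 0.5 * log2(1 + x)

# Assumed symmetric parameters and time/power split.
S, I = 2.0, 1.0
alpha, b1, b2 = 0.5, 0.6, 0.8   # alpha = time fraction, b1/b2 = beta_1/beta_2

# Second-slot scalings beta-bar from the average power constraint:
bb1 = (1 - alpha * b1) / (1 - alpha)
bb2 = (1 - alpha * b2) / (1 - alpha)
# Check: alpha*beta + (1 - alpha)*beta-bar = 1, i.e. average power is met.
assert abs(alpha * b1 + (1 - alpha) * bb1 - 1.0) < 1e-12

# Part (a) bounds, written per-slot (equivalent to the displayed fractions):
R1_a = alpha * C(b1 * S / (1 + b2 * I)) + (1 - alpha) * C(bb1 * S / (1 + bb2 * I))
R2_a = alpha * C(b2 * S / (1 + b1 * I)) + (1 - alpha) * C(bb2 * S / (1 + bb1 * I))

# Part (b) per-user bounds (no interference penalty in the individual rates):
R1_b = alpha * C(b1 * S) + (1 - alpha) * C(bb1 * S)
R2_b = alpha * C(b2 * S) + (1 - alpha) * C(bb2 * S)

# Removing the interference term can only increase each individual bound:
assert R1_a <= R1_b and R2_a <= R2_b
```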
5. Problem 6.13. For achievability, taking U1 = T and U2 = ∅, the Han-Kobayashi inner
bound reduces to the set of rate pairs (R1 , R2 ) such that
R1 < I(X1 ; Y1 |Q),
R2 < H(Y2 |T, Q),
R1 + R2 < I(X1 ; Y1 |T, Q) + H(Y2 |Q)
for some pmf p(q)p(x1 |q)p(x2 |q).
The converse is as follows. Let Q ∼ Unif[1 : n] be independent of (X1^n , X2^n , T^n , Y1^n , Y2^n )
and identify (X1Q , TQ , Y1Q , Y2Q ) = (X1 , T, Y1 , Y2 ). Then,
nR1 ≤ I(M1 ; Y1^n ) + nεn
    = I(X1^n ; Y1^n ) + nεn
    ≤ Σ_{i=1}^n I(X1i ; Y1i ) + nεn
    = Σ_{i=1}^n I(X1i ; Y1i |Q = i) + nεn
    = n I(X1 ; Y1 |Q) + nεn .

For the second inequality, consider
nR2 ≤ I(M2 ; Y2^n ) + nεn
(a) ≤ I(M2 ; Y2^n |M1 ) + nεn
    = I(M2 , X2^n ; Y2^n |M1 , T^n ) + nεn
    ≤ I(M1 , M2 , X2^n ; Y2^n |T^n ) + nεn
(b) = H(Y2^n |T^n ) + nεn
    ≤ Σ_{i=1}^n H(Y2i |Ti ) + nεn
    = Σ_{i=1}^n H(Y2i |Ti , Q = i) + nεn
    = n H(Y2 |T, Q) + nεn ,
where (a) follows since M1 and M2 are independent and (b) follows since Y2^n is a
function of (T^n , X2^n ). For the last inequality, consider
n(R1 + R2 ) ≤ I(X1^n ; Y1^n ) + I(X2^n ; Y2^n ) + nεn
    ≤ I(X1^n ; T^n , Y1^n ) + I(X2^n ; Y2^n ) + nεn
    = I(X1^n ; T^n ) + I(X1^n ; Y1^n |T^n ) + I(X2^n ; Y2^n ) + nεn
    ≤ H(T^n ) + H(Y1^n |T^n ) − H(Y1^n |X1^n , T^n ) + H(Y2^n ) − H(Y2^n |X2^n ) + nεn
(c) = H(Y1^n |T^n ) − H(Y1^n |X1^n ) + H(Y2^n ) + nεn
    ≤ Σ_{i=1}^n (H(Y1i |Ti ) − H(Y1i |X1i ) + H(Y2i )) + nεn
    = n(H(Y1 |T, Q) − H(Y1 |X1 , Q) + H(Y2 |Q)) + nεn
    = n(I(X1 ; Y1 |T, Q) + H(Y2 |Q)) + nεn ,
where (c) follows since H(Y2^n |X2^n ) = H(T^n ) and T^n → X1^n → Y1^n form a Markov chain.
Taking n → ∞ completes the proof of the converse.
