
A Comparison Analysis of Hexagonal Multilevel QAM and

Rectangular Multilevel QAM 1

Karin Engdahl and Kamil Sh. Zigangirov


Department of Information Technology
Telecommunication Theory Group
Lund University, Box 118, S-221 00 Lund, Sweden
Phone +46 46 2223450, Fax +46 46 2224714, E-mail karin@it.lth.se

Abstract: The performances of two multilevel modulation systems are compared in terms of capacity and cutoff rate, under the condition that the average energy per channel use is the same for the two systems. The systems are versions of the scheme of Imai and Hirakawa, and we consider the multistage suboptimal receiver for the Gaussian channel. The first system is an eight-level modulation system using rectangular QAM-signaling and a binary alphabet; the second is a five-level modulation system using hexagonal QAM-signaling and a ternary alphabet. It is shown that, under these conditions, the capacity of the hexagonal system is close to 1 dB better than the capacity of the rectangular system for a range of signal-to-noise ratios. The Chernoff bounding parameter is also calculated for hexagonal QAM with an infinite number of signal points.

Keywords: Multilevel modulation, hexagonal QAM-signaling, rectangular QAM-signaling, QAM-signaling, set partitioning, ternary signaling.

1 This work was supported in part by the Swedish Research Council for Engineering Sciences under Grant 95-164.

I. INTRODUCTION
The principle of trellis-coded modulation and the concept of set partitioning were described by Ungerboeck in his paper of 1982 [7]. In [4], Imai and Hirakawa proposed a coded modulation scheme in which the labels on the branches from one level of the partition chain to the next are encoded by independent codes. This scheme enables the use of a suboptimal multistage decoder, which offers a performance/complexity advantage over the maximum likelihood decoder.
The aim of this work is to compare the capacities of an eight-level modulation scheme using rectangular QAM-signaling and a five-level modulation scheme using hexagonal QAM-signaling. Both schemes are versions of the one proposed in [4]. We consider the discrete memoryless Gaussian channel. The multistage decoder consists of a set of suboptimal decoders matched to the codes used on the corresponding levels of encoding, and uses as metric the squared Euclidean distance from the received point to the nearest points of the corresponding signal subset [2], [4].
First, in Section II, we give a description of the systems considered. Then, in Section III, the main results of [2] are presented, that is, the expressions for the Chernoff bounding parameter Z, the cutoff rate and the capacity for rectangular QAM-signaling. The Chernoff bounding parameter Z for hexagonal QAM-signaling is calculated in Section IV, and the corresponding cutoff rate and capacity are also given. A comparison analysis is performed in Section V. It is shown that for certain signal-to-noise ratios the capacity of the hexagonal constellation is almost 1 dB better than that of the rectangular constellation.

II. SYSTEM DESCRIPTION
The schemes used are shown in Figures 1 and 2. An information sequence u (binary in Figure 1, ternary in Figure 2) is partitioned into K subsequences (K = 8 in Figure 1 and K = 5 in Figure 2), and each subsequence is encoded by an independent code $C_k$. The output code sequences are $v^{(1)}, v^{(2)}, \ldots, v^{(K)}$, where $v^{(k)} = v^{(k)}(1), v^{(k)}(2), \ldots, v^{(k)}(n), \ldots$ for $k = 1, 2, \ldots, K$. A set of K symbols $\{v^{(1)}(n), v^{(2)}(n), \ldots, v^{(K)}(n)\}$, one from each code sequence, is synchronously mapped onto one of the QAM signal points, $s(n)$. The channel considered in this paper is the discrete memoryless Gaussian channel with complex input sequence $s = s(1), s(2), \ldots, s(n), \ldots$ and complex output sequence $r = s + e$. The sequence $e = e(1), e(2), \ldots, e(n), \ldots$ is an error sequence with $e(n) = e^{(I)}(n) + j e^{(Q)}(n)$, where $e^{(I)}(n)$ and $e^{(Q)}(n)$ are independent Gaussian random variables with zero mean and variance $\sigma^2$. The multistage suboptimal decoder is a modified version of the one proposed in [4]. Each decoding stage consists of the calculation of distances (metrics) from the received sequence r to all possible codewords on the corresponding level of set partitioning. The side information from the previous decoding stages determines, according to the set partitioning structure, the signal set upon which the metrics are calculated.

When the decoder calculates the metrics, it uses the following suboptimal principle. Suppose that a block code of length N is used on the kth level of the encoding, and that the decoding in the previous (k-1) decoding stages determines the subsets $S^{(k-1)}(1), S^{(k-1)}(2), \ldots, S^{(k-1)}(N)$ to which the transmitted symbols of the codeword $v^{(k)} = v^{(k)}(1), v^{(k)}(2), \ldots, v^{(k)}(N)$ belong. Let $S_i^{(k-1)}(n)$ (with $i = 0, 1$ in Figure 1 and $i = 0, 1, 2$ in Figure 2) be the subsets of $S^{(k-1)}(n)$ corresponding to transmission of $v^{(k)}(n) = i$, respectively. Let $s^{(k)}(n) \in S^{(k-1)}(n)$, $n = 1, 2, \ldots, N$, and $s^{(k)} = s^{(k)}(1), s^{(k)}(2), \ldots, s^{(k)}(N)$. Finally, let $S^{(k-1)}_{v^{(k)}} = S^{(k-1)}_{v^{(k)}(1)}(1), S^{(k-1)}_{v^{(k)}(2)}(2), \ldots, S^{(k-1)}_{v^{(k)}(N)}(N)$ be the sequence of subsets corresponding to transmission of the codeword $v^{(k)}$. Then the distance (metric) between the received sequence $r = r(1), r(2), \ldots, r(N)$ and the codeword $v^{(k)}$ is determined as

$\mu\left(r, v^{(k)}\right) = \min_{s^{(k)} \in S^{(k-1)}_{v^{(k)}}} d_E\left(r, s^{(k)}\right),$   (1)

where $d_E(x, y)$ is the squared Euclidean distance between the N-dimensional vectors x and y. The decoding consists of choosing the codeword $v^{(k)}$ for which the metric $\mu(r, v^{(k)})$ above is minimal.
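To make the decoding rule concrete, the following minimal Python sketch (our own illustration; the names codebook and subsets are hypothetical and not taken from the paper) evaluates the metric (1) for every candidate codeword and returns the one for which it is minimal.

    def metric(r, chosen_subsets):
        """Metric (1): because the subsets are selected per position, the minimum over
        the whole sequence equals the sum of per-position minima of the squared
        Euclidean distance from the received symbol r[n] to the chosen subset."""
        return sum(min(abs(r[n] - s) ** 2 for s in subset)
                   for n, subset in enumerate(chosen_subsets))

    def decode_stage(r, codebook, subsets):
        """Stage-k decoding: choose the codeword v with minimal metric (1).
        subsets[n][i] plays the role of S_i^(k-1)(n), the signal subset selected
        when the codeword symbol at position n equals i; signal points are complex."""
        return min(codebook,
                   key=lambda v: metric(r, [subsets[n][v[n]] for n in range(len(v))]))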

III. CAPACITY AND CUTOFF RATE FOR RECTANGULAR QAM

In [2] we analyzed a multilevel modulation scheme using rectangular QAM-signaling. The main result was the derivation of explicit expressions for the Chernoff bounding parameter Z, which is used in the calculation of error probabilities as described below. The performance of a multilevel coded modulation system that employs a multistage decoder is commonly estimated by the average bit error probability of each component code, or by the block error probability and the burst error probability for block and convolutional coding, respectively [1], [3], [5], [6]. When a linear block code is used on a signaling level, an upper bound on the probability of decoding error $P(\varepsilon)$ for that level is [9]

$P(\varepsilon) \le G(D)\big|_{D=Z},$   (2)

and when a convolutional code is used, the upper bound on the burst error (first-event) probability $P(\varepsilon)$ is [8]

$P(\varepsilon) < T(D)\big|_{D=Z},$   (3)

where G(D) (respectively T(D)) is the generating function of the code. The values of Z that give exponentially tight bounds in (2) and (3) were derived in [2] using the Chernoff bounding method [10], and are given below.
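As a purely illustrative example of how (3) is used, consider the standard rate-1/2, memory-2 convolutional code with generators (7,5) in octal (a code chosen by us for illustration, not one discussed in this paper); its generating function is T(D) = D^5/(1 - 2D), so the bound follows by substituting D = Z:

    def burst_error_bound(Z: float) -> float:
        """Upper bound (3) for the (7,5) rate-1/2 convolutional code:
        T(D) evaluated at D = Z, with T(D) = D^5 / (1 - 2D)."""
        if Z >= 0.5:
            return float("inf")  # the geometric series behind T(D) diverges
        return Z ** 5 / (1.0 - 2.0 * Z)

    print(burst_error_bound(0.1))  # about 1.25e-05 for Z = 0.1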
On the last decoding level, where the signal constellation consists of two points, we have

$Z_2 = e^{-\lambda^2/8},$   (4)

where $\lambda$ is the least Euclidean distance between any two points in the signal constellation (on the last level there is only one distance), normalized by the standard deviation $\sigma$ of the noise. If $\lambda = \lambda_0$ on the first decoding level, then $\lambda = (\sqrt{2})^{k-1}\lambda_0$ on the kth decoding level. On the last level but one, where the signal constellation consists of four points,
$Z_4 = \min_{s \ge 0}\; 2\, e^{2(s\lambda)^2}\, Q\!\left(-\sqrt{2}\, s\lambda\right) \left( e^{-s\lambda^2}\, Q\!\left(\frac{\lambda}{\sqrt{2}}\,(2s - 1)\right) + e^{s\lambda^2}\, Q\!\left(\frac{\lambda}{\sqrt{2}}\,(2s + 1)\right) \right),$   (5)
where $Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt$. On each of the other levels the signal constellation is approximated by a rectangular QAM signal set with an infinite number of signal points, and thus we get an upper bound on Z for any level of decoding:

$Z_{\infty,R} = \min_{s \ge 0}\; 4\, e^{\lambda^2\left(s^2 - s/\sqrt{2}\right)} \left( \sum_{j=-\infty}^{\infty} e^{-\sqrt{2}\, s\lambda^2 j} \left( Q\!\left(\lambda\left(\sqrt{2}\, j - s\right)\right) - Q\!\left(\lambda\left(\sqrt{2}\, j - s\right) + \frac{\lambda}{\sqrt{2}}\right) \right) \right)^{2}.$   (6)
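For numerical work, the Q-function in (5) and (6) can be evaluated through the complementary error function; a small helper of our own (using SciPy) is:

    from scipy.special import erfc

    def Q(x: float) -> float:
        """Gaussian tail probability Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-t^2/2) dt."""
        return 0.5 * erfc(x / 2 ** 0.5)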
The cutoff rate for each level can be calculated as

$R_c = -\log_2\left(\frac{1 + Z}{2}\right),$   (7)

and the total cutoff rate is obtained by adding the cutoff rates of all the levels. The capacity of each level is

$C = H(\Lambda) - H(\Lambda \mid I),$   (8)

where $\Lambda = \sqrt{2}\,(X + Y)\,\lambda^{-2}$ for the case with an infinite number of signal points, (X, Y) are the coordinates of the received signal point, and I indicates the symbol sent on the corresponding level. For details we refer to [2]. The total capacity is calculated analogously to the total cutoff rate.
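As an illustration of how the per-level quantities are combined into a total, the sketch below sums the per-level cutoff rates (7) of an eight-level scheme; the listed values of Z are placeholders of our own and not results from [2].

    import math

    def cutoff_rate_binary(Z: float) -> float:
        """Per-level cutoff rate (7) for a binary alphabet: R_c = -log2((1 + Z) / 2)."""
        return -math.log2((1.0 + Z) / 2.0)

    # Placeholder per-level Chernoff parameters (the true values depend on lambda_0, see [2]).
    Z_levels = [0.9, 0.7, 0.4, 0.15, 0.03, 3e-3, 1e-4, 1e-6]
    total = sum(cutoff_rate_binary(Z) for Z in Z_levels)
    print(f"total cutoff rate: {total:.3f} bits per channel use")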

IV. CAPACITY AND CUTOFF RATE FOR HEXAGONAL QAM

We now consider a ternary alphabet, as opposed to the binary alphabet that was considered in the case of rectangular QAM. An example of the set partitioning in a multilevel modulation scheme with five levels is shown in Figure 3. On the last decoding level, where the signal constellation consists of three signal points, the value of the parameter Z is the same as in the case of two signal points, that is,

$Z_3 = e^{-\lambda^2/8},$   (9)

where $\lambda$ is defined as before. Here, if $\lambda = \lambda_0$ on the first decoding level, then $\lambda = (\sqrt{3})^{k-1}\lambda_0$ on the kth decoding level. Once again, the signal constellation on all the other levels is approximated by a hexagonal QAM signal set with an infinite number of signal points, Figure 4, and thus we get an upper bound on Z:
Theorem 1  On each level of suboptimal decoding of a multilevel coded hexagonal QAM signal set with an infinite number of signal points, the Chernoff bounding parameter Z has the value

$Z_{\infty,H} = \min_{s \ge 0}\; e^{-s\lambda^2} \left( \int_{0}^{\lambda/2} e^{2s\lambda x}\, f_X\!\left(x, \sqrt{3}\,x\right) dx + \int_{\lambda/2}^{\lambda} e^{2s\lambda x}\, f_X\!\left(x, \sqrt{3}\,(\lambda - x)\right) dx \right),$   (10)

where

$f_X(x; b) = \frac{3}{\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \Big( e^{-\frac{1}{2}(x - 3\lambda j)^2} \left[ Q\!\left(\sqrt{3}\,\lambda k - b\right) - Q\!\left(\sqrt{3}\,\lambda k + b\right) \right] + e^{-\frac{1}{2}\left(x - 3\lambda\left(j + \frac{1}{2}\right)\right)^2} \left[ Q\!\left(\sqrt{3}\,\lambda\left(k + \tfrac{1}{2}\right) - b\right) - Q\!\left(\sqrt{3}\,\lambda\left(k + \tfrac{1}{2}\right) + b\right) \right] \Big).$   (11)

Proof  The set partitioning corresponding to a hexagonal QAM signal set with an infinite number of signal points is shown in Figure 4. Without loss of generality we suppose that the transmitted codeword on the kth level is $v_0^{(k)}$, the all-zero codeword, corresponding to sending points from the reference set, Figure 4. We also suppose that $v_l^{(k)}$ is a codeword of Hamming weight $w_l^{(k)}$. To simplify the analysis, we change the order of the transmitted symbols such that the first $w_l^{(k)}$ symbols of $v_l^{(k)}$ are non-zero. From [2] we have that

$Z^{(k)} = \min_{s \ge 0} \varphi^{(k)}(s),$   (12)

where $\varphi^{(k)}(s)$ is the generating function of the metric

$\mu_l^{(k)}(n) = d_E\!\left(r(n), v_0^{(k)}(n)\right) - d_E\!\left(r(n), v_l^{(k)}(n)\right),$   (13)

and where

$d_E\!\left(r(n), v_l^{(k)}(n)\right) = \min_{s^{(k)}(n) \in S^{(k-1)}_{v_l^{(k)}(n)}(n)} d_E\!\left(r(n), s^{(k)}(n)\right).$   (14)

To simplify notation in the following, we leave out the superscript (k) and the argument (n). Thus, to calculate the Chernoff bounding parameter Z we need to study the metric $\mu$, which is the difference between the distances from the received point to the nearest reference point and to the nearest opposite point. Conditioned on the received point, having coordinates (x, y), being in the region marked in Figure 4,

$\mu = \left(x^2 + y^2\right) - \left((\lambda - x)^2 + y^2\right) = 2\lambda x - \lambda^2.$   (15)

The conditional probability density function of (X, Y), given that a point of the reference set was transmitted and given that the received point is in the marked region, is
$f_{(X,Y)}(x, y) = \frac{3}{2\pi} \sum_{k=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \left( e^{-\frac{1}{2}\left[(x - 3\lambda j)^2 + (y - \sqrt{3}\lambda k)^2\right]} + e^{-\frac{1}{2}\left[\left(x - 3\lambda\left(j + \frac{1}{2}\right)\right)^2 + \left(y - \sqrt{3}\lambda\left(k + \frac{1}{2}\right)\right)^2\right]} \right).$   (16)
Consequently, the probability density function of X under the same conditions is

$f_X(x) = \begin{cases} f_X\!\left(x, \sqrt{3}\,x\right), & 0 \le x \le \lambda/2, \\ f_X\!\left(x, \sqrt{3}\,(\lambda - x)\right), & \lambda/2 \le x \le \lambda, \end{cases}$   (17)

where

$f_X(x; b) = \int_{y=-b}^{b} f_{(X,Y)}(x, y)\, dy,$   (18)

the result of which is (11). If the received point is in any other region, we can introduce an alternative system of coordinates such that we have the same situation as in Figure 4, and thus (15)-(18) remain valid. Following the technique used in [2] we thus get

$Z_{\infty,H} = \min_{s \ge 0} \int e^{s\mu} f(\mu)\, d\mu = \min_{s \ge 0} \int_{x=0}^{\lambda} e^{s(2\lambda x - \lambda^2)}\, f_X(x)\, dx,$   (19)

which is the same as in (10).  □


Numerical values of $Z_{\infty,H}$ and $Z_3$ are shown in Table 1. There it can be seen that when $\lambda$ is large the quotient $Z_{\infty,H}/Z_3$ approaches 3. This is an expected result, due to the "nearest neighbor error events" principle and the fact that every signal point in the constellation with infinitely many signal points has exactly three nearest neighbors of each sort.
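The $Z_3$ column of Table 1 follows directly from (9); a quick numerical check of our own:

    import math

    def Z3(lam: float) -> float:
        """Chernoff bounding parameter (9) for the three-point last-level constellation."""
        return math.exp(-lam ** 2 / 8.0)

    for lam in (2.0, 4.0, 7.0):
        print(lam, round(Z3(lam), 4))  # 0.6065, 0.1353, 0.0022, matching Table 1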
The cutoff rate for each level is calculated as

$R_c = -\log_2\left(\frac{1 + 2Z}{3}\right).$   (20)
The capacity of each level is

$C = H(X, Y) - H(X, Y \mid I) = -\iint \left( \frac{1}{3} \sum_{i=0}^{2} f^{(i)}_{(X,Y)}(x, y) \right) \log_2\!\left( \frac{1}{3} \sum_{i=0}^{2} f^{(i)}_{(X,Y)}(x, y) \right) dx\, dy + \frac{1}{3} \sum_{i=0}^{2} \iint f^{(i)}_{(X,Y)}(x, y) \log_2 f^{(i)}_{(X,Y)}(x, y)\, dx\, dy,$   (21)

where the integration area is the upper triangle of the region marked in Figure 4 (for the last level, when the signal constellation consists of three points, the integration area is $\mathbb{R}^2$), and $f^{(i)}_{(X,Y)}(x, y)$ is the conditional probability density function of the coordinates (x, y) given that the received signal point is in the integration area and given that the sent symbol was i, where i = 0, 1, 2. The total capacity and the total cutoff rate are computed as before.
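For the last level, where the integration area in (21) is all of $\mathbb{R}^2$, the capacity can also be estimated by Monte Carlo simulation. The sketch below is our own illustration: the placement of the three signal points (an equilateral triangle with side lambda) and the value lambda = 4 are assumptions made only for this example, and the noise has unit variance per dimension since lambda is already normalized by sigma.

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 4.0  # normalized minimum distance on the last decoding level (example value)

    # Three signal points forming an equilateral triangle with side lam.
    pts = lam * np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])

    n = 200_000
    i = rng.integers(0, 3, size=n)            # uniformly chosen ternary symbols
    r = pts[i] + rng.standard_normal((n, 2))  # received points, unit-variance noise

    # Conditional likelihoods f(r | i) up to a common Gaussian normalization that cancels.
    d2 = ((r[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    lik = np.exp(-d2 / 2.0)

    # Monte Carlo estimate of I(I; R) = E[log2(f(r | i) / f(r))], f(r) = (1/3) sum_i f(r | i).
    mi = np.mean(np.log2(lik[np.arange(n), i] / lik.mean(axis=1)))
    print(f"estimated last-level capacity: {mi:.3f} bits/ch. use (at most log2(3) = 1.585)")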

V. COMPARISON ANALYSIS
We want to compare an eight-level rectangular QAM system to a five-level hexagonal QAM system, Figure 3, under the condition that the average energy per channel use is the same for the two systems. The average energy per channel use is, for the rectangular system,

$E_R = \frac{10880}{256}\, (\lambda_R)^2,$   (22)

and for the hexagonal system,

$E_H = \frac{6942}{243}\, (\lambda_H)^2.$   (23)

If these energies are to be equal, we get for the normalized Euclidean distances

$\lambda_H = \sqrt{\frac{10880}{256} \cdot \frac{243}{6942}}\; \lambda_R \approx 1.2197\, \lambda_R.$   (24)
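The numerical constant in (24) is easy to verify (a one-line check of our own):

    from math import sqrt

    # lambda_H / lambda_R implied by setting (22) equal to (23)
    print(round(sqrt((10880 / 256) * (243 / 6942)), 4))  # 1.2197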

To calculate the capacity and cutoff rate in the rectangular QAM case we use the results in [2], and approximate all but the last two levels with rectangular QAM signal sets with an infinite number of signal points. In the case of hexagonal QAM we approximate all but the last level with hexagonal QAM signal sets with an infinite number of signal points, Figure 4. The resulting cutoff rates and capacities are shown in Figures 5, 6, 7 and 8. It can be seen from Figures 7 and 8 that in terms of capacity the hexagonal system is close to 1 dB better than the rectangular one for signal-to-noise ratios between 10 dB and 25 dB. In terms of cutoff rate the hexagonal system is also better, but the difference there is approximately 0.7 dB.

References
[1] E. Biglieri, D. Divsalar, P. J. McLane and M. K. Simon, Introduction to Trellis-Coded Modulation with Applications, Macmillan, 1991.

[2] K. Engdahl and K. Sh. Zigangirov, "On the Calculation of the Error Probability for a Multilevel Modulation Scheme Using QAM-signaling," submitted to IEEE Trans. Information Theory.

[3] J. Huber, "Multilevel Codes: Distance Profiles and Channel Capacity," in ITG-Fachbericht 130, pp. 305-319, Oct. 1994. Conference Record.

[4] H. Imai and S. Hirakawa, "A New Multilevel Coding Method Using Error-Correcting Codes," IEEE Trans. Information Theory, vol. IT-23, pp. 371-377, May 1977.

[5] Y. Kofman, E. Zehavi and S. Shamai, "Performance Analysis of a Multilevel Coded Modulation System," IEEE Trans. Communications, vol. COM-42, pp. 299-312, Feb./Mar./Apr. 1994.

[6] G. Pottie and D. Taylor, "Multilevel Codes Based on Partitioning," IEEE Trans. Information Theory, vol. IT-35, pp. 87-98, Jan. 1989.

[7] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Trans. Information Theory, vol. IT-28, pp. 55-67, Jan. 1982.

[8] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding, McGraw-Hill, 1979.

[9] S. G. Wilson, Digital Modulation and Coding, Prentice Hall, 1996.

[10] J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering, Wiley, 1965.

LIST OF FIGURE CAPTIONS

Figure 1: System description of the eight-level modulation scheme using rectangular QAM-signaling.

Figure 2: System description of the five-level modulation scheme using hexagonal QAM-signaling.

Figure 3: Set partitioning of 243-point hexagonal QAM.

Figure 4: A hexagonal QAM signal constellation with an infinite number of signal points.

Figure 5: Comparison of the total cutoff rates of the two systems, rectangular QAM (dashed) and hexagonal QAM (solid). SNR is the ratio of the average energy per channel use to 2σ².

Figure 6: Detail of Figure 5.

Figure 7: Comparison of the total capacities of the two systems, rectangular QAM (dashed) and hexagonal QAM (solid). SNR is the ratio of the average energy per channel use to 2σ².

Figure 8: Detail of Figure 7.

Table 1: Numerical values of Z in the case of hexagonal QAM. For numerical values of Z in the case of rectangular QAM we refer to [2].

Figure 1: [Block diagram: the information sequence u is split into subsequences u(1), ..., u(8), encoded by C1, ..., C8 into v(1), ..., v(8), mapped by a partition-of-rectangular-QAM mapper (2^8 signal points) onto s, transmitted over the AWGN channel, received as r, and decoded by the suboptimal multistage decoder into û.]

Figure 2: [Block diagram: the information sequence u is split into subsequences u(1), ..., u(5), encoded by C1, ..., C5 into v(1), ..., v(5), mapped by a partition-of-hexagonal-QAM mapper (3^5 signal points) onto s, transmitted over the AWGN channel, received as r, and decoded by the suboptimal multistage decoder into û.]

λ      Z_3      Z_{∞,H}   Z_{∞,H}/Z_3
0.5    0.9692   1.0000    1.03
1.0    0.8825   1.0000    1.13
1.5    0.7548   0.9991    1.32
2.0    0.6065   0.9747    1.61
2.5    0.4578   0.8805    1.92
3.0    0.3247   0.7190    2.21
3.5    0.2163   0.5310    2.46
4.0    0.1353   0.3572    2.64
4.5    0.0796   0.2206    2.77
5.0    0.0439   0.1258    2.86
5.5    0.0228   0.0666    2.92
6.0    0.0111   0.0328    2.96
6.5    0.0051   0.0151    2.98
7.0    0.0022   0.0065    2.99

Table 1:

Figure 3: [Set-partitioning diagram for the 243-point hexagonal QAM constellation: the full hexagonal constellation is split level by level into three subsets per level; the constellation diagrams are not reproducible in plain text.]

Figure 4: [A hexagonal QAM signal constellation with an infinite number of signal points, showing the reference points, the opposite points, the x and y axes, and the marked decision region used in the proof of Theorem 1.]

Figure 5: [Plot: total cutoff rate (bits/ch. use) versus SNR (dB), from -10 dB to 30 dB, for the rectangular (dashed) and hexagonal (solid) systems.]

Figure 6: [Detail of Figure 5: total cutoff rate (bits/ch. use), from 3 to 5, versus SNR (dB), from 10 dB to 15 dB.]
Figure 7: [Plot: total capacity C (bits/ch. use) versus SNR (dB), from -10 dB to 30 dB, for the rectangular (dashed) and hexagonal (solid) systems.]

Figure 8: [Detail of Figure 7: total capacity C (bits/ch. use), from 3 to 5, versus SNR (dB), from 10 dB to 15 dB.]

