10EC55
Assignment Questions
UNIT 1:
1. Explain the terms (i) Self information (ii) Average information (iii) Mutual
Information.
2. Discuss the reason for using logarithmic measure for measuring the amount of
information.
3. Explain the concept of amount of information associated with message. Also
explain what infinite information is and zero information.
4. A binary source emits an independent sequence of 0s and 1s with probabilities p and (1 - p) respectively. Plot the entropy of the source as a function of p.
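As a worked sketch of this question (plain Python; the function and variable names are our own), the binary entropy H(p) = -p log2 p - (1 - p) log2(1 - p) can be tabulated over 0 < p < 1:

```python
import math

def binary_entropy(p):
    """Entropy H(p) of a binary source with P(1) = p, P(0) = 1 - p, in bits."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries zero information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Sample H(p) across 0 < p < 1; the curve peaks at p = 0.5 with H = 1 bit.
samples = {p / 10: round(binary_entropy(p / 10), 4) for p in range(1, 10)}
print(samples[0.5])   # 1.0
print(samples[0.1])   # 0.469
```

Plotting these samples reproduces the familiar symmetric entropy curve, zero at p = 0 and p = 1 and maximal at p = 0.5.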
Dept. of ECE/SJBIT
11. A zero-memory source has a source alphabet S = {S1, S2, S3} with P = {1/2, 1/4, 1/4}. Find the entropy of the source. Also determine the entropy of its second extension and verify that H(S²) = 2H(S).
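A quick numerical check of the H(S²) = 2H(S) identity for this source (plain Python sketch; names are our own):

```python
import math
from itertools import product

P = {"S1": 0.5, "S2": 0.25, "S3": 0.25}

def entropy(probs):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_S = entropy(P.values())                      # 1.5 bits/symbol
# Second extension: all 9 ordered pairs; probabilities multiply (zero memory).
P2 = [P[a] * P[b] for a, b in product(P, repeat=2)]
H_S2 = entropy(P2)                             # 3.0 bits/pair
print(H_S, H_S2)  # 1.5 3.0
```

The extension doubles the entropy exactly because the source has no memory, so pair probabilities factor.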
12. Show that the entropy is maximum when the source transmits symbols with equal probability. Plot the entropy of a binary source versus p (0 < p < 1).
13. The output of an information source consists of 128 symbols, 16 of which occur with probability 1/32 and the remaining 112 of which occur with probability 1/224. The source emits 1000 symbols/sec. Assuming that the symbols are chosen independently, find the rate of information of the source.
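One way to sketch the computation (plain Python; the symbol rate and probabilities are taken from the question itself):

```python
import math

# 16 symbols with p = 1/32, 112 symbols with p = 1/224; 1000 symbols/sec.
probs = [1 / 32] * 16 + [1 / 224] * 112
assert abs(sum(probs) - 1.0) < 1e-12           # sanity: a valid distribution

H = -sum(p * math.log2(p) for p in probs)      # entropy, bits/symbol
R = 1000 * H                                   # information rate, bits/sec
print(round(H, 3), round(R))                   # 6.404 6404
```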
14. A CRT terminal is used to enter alphanumeric data into a computer. The CRT is connected through a voice-grade telephone line having a usable bandwidth of 3 kHz and an output S/N of 10 dB. Assume that the terminal has 128 characters and that data is sent in an independent manner with equal probability.
i. Find average information per character.
ii. Find capacity of the channel.
iii. Find maximum rate at which data can be sent from terminal to the
computer without error.
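A sketch of the three parts (plain Python; it assumes the textbook Shannon-Hartley formula C = B log2(1 + S/N), with the 10 dB figure converted to a power ratio):

```python
import math

B = 3000                      # usable bandwidth, Hz
snr_db = 10
snr = 10 ** (snr_db / 10)     # 10 dB -> power ratio of 10

H = math.log2(128)            # i.  average information per character = 7 bits
C = B * math.log2(1 + snr)    # ii. channel capacity, bits/sec
rate = C / H                  # iii. maximum error-free character rate
print(H, round(C), round(rate))  # 7.0 10378 1483
```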
15. A message source produces two independent symbols A and B with probabilities P(A) = 0.4 and P(B) = 0.6. Calculate the efficiency of the source and hence its redundancy. If on average 4 in every 100 symbols are received in error, calculate the transmission rate of the system.
16. A message source produces two independent symbols U and V with probabilities P(U) = 0.4 and P(V) = 0.6. Calculate the efficiency of the source and hence the redundancy. If on average 3 in every 100 symbols are received in error, calculate the transmission rate of the system.
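For the entropy, efficiency, and redundancy parts of this question, a plain-Python sketch (the transmission-rate part additionally needs the channel model assumed in the course text, so it is not computed here):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = {"U": 0.4, "V": 0.6}
H_src = entropy(p.values())           # source entropy, bits/symbol
H_max = math.log2(len(p))             # 1 bit for a binary alphabet
efficiency = H_src / H_max
redundancy = 1 - efficiency
print(round(H_src, 4), round(efficiency, 4), round(redundancy, 4))
# ~ 0.971 bits/symbol, ~ 97.1 % efficiency, ~ 2.9 % redundancy
```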
17. The output of a DMS consists of the possible letters x1, x2, …, xn, which occur with probabilities p1, p2, …, pn respectively. Prove that the entropy H(X) of the source is at most log2 n.
UNIT 2:
1. What do you mean by source encoding? Name the functional requirements to be satisfied in the development of an efficient source encoder.
UNIT 3:
1. What are important properties of the codes?
2. What are the disadvantages of variable-length coding?
3. Explain with examples: uniquely decodable codes and instantaneous codes.
5. Explain the Shannon-Fano coding procedure for the construction of an optimum code.
6. Explain clearly the procedure for the construction of compact Huffman code.
7. A discrete source transmits six message symbols with probabilities of 0.3, 0.2, 0.2, 0.15, 0.1, 0.05. Devise suitable Fano and Huffman codes for the messages and determine the average length and efficiency of each code.
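A plain-Python sketch of the Huffman part of this question (heapq-based; the tree shape may differ from a hand construction, but the average length is the same for any Huffman code of a given source):

```python
import heapq
import math

def huffman_lengths(probs):
    """Return codeword lengths from a binary Huffman tree (heapq-based)."""
    heap = [(p, [i]) for i, p in enumerate(probs)]  # (prob, symbol indices)
    lengths = [0] * len(probs)
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, ids1 = heapq.heappop(heap)
        p2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:
            lengths[i] += 1          # each merge adds one bit to these symbols
        heapq.heappush(heap, (p1 + p2, ids1 + ids2))
    return lengths

probs = [0.3, 0.2, 0.2, 0.15, 0.1, 0.05]
L = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
H = -sum(p * math.log2(p) for p in probs)
print(round(L, 3), round(H, 3), round(H / L, 4))  # 2.45 2.409 0.9831
```

The efficiency H/L is about 98.3 % for the Huffman code; the Fano construction is done analogously by hand.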
8. Consider the messages given by the probabilities 1/16, 1/16, 1/8, …. Calculate H. Use the Shannon-Fano algorithm to develop an efficient code and, for that code, calculate the average number of bits/message and compare it with H.
9. Consider a source with 8 alphabets and respective probabilities as shown:
Symbol:      A     B     C     D     E     F     G     H
Probability: 0.20  0.18  0.15  0.10  0.08  0.05  0.02  0.01
Construct the binary Huffman code for this source. Construct the quaternary Huffman code and show that the efficiency of this code is worse than that of the binary code.
10. Define Noiseless channel and deterministic channel.
11. A source produces symbols X, Y, Z with equal probabilities at a rate of 100/sec. Owing to noise on the channel, the probabilities of correct reception of the various symbols are as shown:
[Table of conditional probabilities P(j/i) over symbols X, Y, Z; the entries are missing from the source.]
12. Determine the rate of transmission I(X,Y) through a channel whose noise characteristic is shown in the figure. P(A1) = 0.6, P(A2) = 0.3, P(A3) = 0.1.
[Channel diagram: transmitter symbols A1, A2, A3 and receiver symbols B1, B2, B3, with transition probabilities of 0.5 as shown in the original figure.]
13. For a discrete memoryless source of entropy H(S), show that the average code-word length for any distortionless source encoding scheme is bounded as L ≥ H(S).
14. Briefly discuss the classification of codes.
15. Show that H(X,Y) = H(X/Y)+H(Y).
16. State the properties of mutual information.
17. Show that I(X,Y) = I(Y,X) for a discrete channel.
18. Show that for a discrete channel, I(X,Y) ≥ 0.
19. Determine the capacity of the channel shown in the figure.
[Channel diagram: input x1 and outputs y1, y2; remaining details are missing from the source.]
20. For the binary erasure channel shown in the figure below, find the following:
i.
ii.
iii.
[Binary erasure channel diagram: input x1, transition probability p, output e(y2); remaining details are missing from the source.]
21. A DMS has an alphabet of seven symbols whose probabilities of occurrences are
described here:
Symbol:      S0    S1    S2    S3    S4    S5    S6
Probability: 0.25  … (the remaining probabilities are missing from the source)
Compute the Huffman code for this source, moving a combined symbol as high as possible. Explain why the computed source code has an efficiency of 100%.
22. Consider a binary block code with 2^n code words, each of the same length n. Show that the Kraft inequality is satisfied for such a code.
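A small numerical illustration of the claim (plain Python; n = 5 is chosen arbitrarily): with 2^n codewords all of length n, the Kraft sum is exactly 1, so the inequality holds with equality.

```python
def kraft_sum(lengths, r=2):
    """Kraft sum: sum of r**(-l) over all codeword lengths l."""
    return sum(r ** -l for l in lengths)

n = 5
lengths = [n] * (2 ** n)          # 2**n codewords, each n bits long
K = kraft_sum(lengths)
print(K)                          # 1.0 -> satisfied with equality
```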
23. Write short notes on the following:
Binary Symmetric Channels (BSC), Binary Erasure Channels (BEC)
Uniform Channel, Cascaded channels
UNIT 4:
1. Show that for an AWGN channel of bandwidth B, signal power S, and noise power spectral density N0/2 watts/Hz, the channel capacity is C = B log2(1 + S/(N0·B)) bits/sec.
2. Consider an AWGN channel of 4 kHz bandwidth with a noise power spectral density of … watts/Hz. The signal power required at the receiver is 0.1 mW. Calculate the capacity of the channel.
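A runnable sketch of the capacity computation (plain Python). The PSD value is missing from the question as printed, so the figure used below is purely an assumption made to get a number out:

```python
import math

B = 4_000                     # bandwidth, Hz
S = 0.1e-3                    # signal power at the receiver, W
N0 = 2e-8                     # W/Hz -- ASSUMED value; the actual PSD is
                              # missing from the question as printed
N = N0 * B                    # total noise power in the band
C = B * math.log2(1 + S / N)  # Shannon-Hartley capacity, bits/sec
print(round(C))               # 4680
```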
3. Given I(xi, yj) = I(xi) - I(xi/yj), prove that I(xi, yj) = I(yj) - I(yj/xi).
4. Design a single-error-correcting code with a message block size of 11 and show by an example that it can correct a single error.
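One concrete answer is the (15, 11) Hamming code. A plain-Python sketch of encoding and single-error correction via the syndrome (the bit layout, with parity bits at power-of-two positions, and all names are our own):

```python
def hamming15_encode(data_bits):
    """Encode 11 data bits into a 15-bit Hamming codeword.
    Parity bits sit at positions 1, 2, 4, 8 (1-indexed)."""
    assert len(data_bits) == 11
    code = [0] * 16                     # index 0 unused; positions 1..15
    it = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):
            code[pos] = next(it)
    for p in (1, 2, 4, 8):
        # even parity over all positions whose index has bit p set
        code[p] = sum(code[i] for i in range(1, 16) if i & p and i != p) % 2
    return code[1:]

def hamming15_correct(word):
    """Return (corrected word, error position or 0) via the syndrome."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i - 1] for i in range(1, 16) if i & p) % 2:
            syndrome += p
    fixed = list(word)
    if syndrome:
        fixed[syndrome - 1] ^= 1        # flip the single erroneous bit
    return fixed, syndrome

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
cw = hamming15_encode(msg)
corrupted = list(cw)
corrupted[6] ^= 1                       # inject a single error at position 7
fixed, pos = hamming15_correct(corrupted)
print(pos, fixed == cw)                 # 7 True
```

The syndrome directly names the erroneous position, which is exactly the demonstration the question asks for.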
5. If Ci and Cj are two code vectors in an (n,k) linear block code, show that their sum is also a code vector.
6. Show that CH^T = 0 for a linear block code.
7. Prove that the minimum distance of a linear block code is the smallest weight among the non-zero code vectors in the code.
UNIT 5:
1. Design a single-error-correcting code with a message block size of 11 and show by an example that it can correct a single error.
2. If Ci and Cj are two code vectors in an (n,k) linear block code, show that their sum is also a code vector.
3. Show that CH^T = 0 for a linear block code.
4. Prove that the minimum distance of a linear block code is the smallest weight among the non-zero code vectors in the code.
5. What is error control coding? Which functional blocks of a communication system accomplish this? Indicate the function of each block. What is the effect of error detection and correction on the performance of a communication system?
6. Explain briefly the following:
Golay codes and BCH codes.
7. Explain the methods of controlling errors.
8. List out the properties of linear codes.
9. Explain the importance of Hamming codes and how they can be used for error detection and correction.
10. Write a standard array for the (7,4) code.
UNIT 6:
1. Write a standard array for systematic cyclic codes.
2. Explain the properties of binary cyclic codes.
3. With neat diagrams, explain binary cyclic encoding and decoding.
4. Explain how the Meggitt decoder can be used for decoding cyclic codes.
5. Write short notes on BCH codes.
6. Draw the general block diagram of encoding circuit using (n-k) bit shift register and
explain its operation.
7. Draw the general block diagram of syndrome calculation circuit for cyclic codes and
explain its operation.
UNIT 7:
1. What are RS codes? How are they formed?
2. Write down the parameters of RS codes and explain those parameters with an example.
3. List the applications of RS codes.
4. Explain why the Golay code is called a perfect code.
5. Explain the concept of shortened cyclic code.
6. What are burst error controlling codes?
7. Explain clearly the interlacing technique with a suitable example.
8. What are Cyclic Redundancy Check (CRC) codes?
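A minimal CRC sketch in plain Python (the generator polynomial below is an assumption chosen for illustration, not one named in the questions):

```python
def mod2_div(bits, poly):
    """Remainder of GF(2) polynomial long division of `bits` by `poly`."""
    bits = list(bits)
    for i in range(len(bits) - len(poly) + 1):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p          # XOR the generator under each leading 1
    return bits[-(len(poly) - 1):]

def crc_encode(data, poly):
    """Append the CRC check bits: remainder of data * x^(deg g) divided by g(x)."""
    n = len(poly) - 1
    return data + mod2_div(data + [0] * n, poly)

g = [1, 0, 1, 1]                 # ASSUMED generator g(x) = x^3 + x + 1
data = [1, 0, 1, 1, 0, 0, 1]
crc = mod2_div(data + [0, 0, 0], g)
codeword = crc_encode(data, g)
print(crc)                       # [0, 1, 1]
print(mod2_div(codeword, g))     # [0, 0, 0] -> zero remainder, accepted
```

The receiver repeats the division on the received word; any non-zero remainder flags a detected error.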
UNIT 8:
1. What are convolutional codes? How are they different from block codes?
2. Explain encoding of convolutional codes using the time domain approach with an example.
3. Explain encoding of convolutional codes using the transform domain approach with an example.
4. What do you understand by the state diagram of a convolutional encoder? Explain clearly.
5. What do you understand by the code tree of a convolutional encoder? Explain clearly.
6. What do you understand by the trellis diagram of a convolutional encoder? Explain clearly.
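The state-diagram, code-tree, and trellis questions above all describe the same machine. A plain-Python sketch of a rate-1/2, constraint-length-3 convolutional encoder with the common textbook generators (7, 5) in octal (the generator choice is ours, not taken from the questions):

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder, constraint length K = 3,
    generators (7, 5) octal: v1 = u(n)+u(n-1)+u(n-2), v2 = u(n)+u(n-2) mod 2."""
    state = 0                            # shift register: the K-1 past input bits
    out = []
    for b in bits:
        reg = (b << (K - 1)) | state     # current bit alongside the two past bits
        out.append(bin(reg & g1).count("1") % 2)  # modulo-2 convolution with g1
        out.append(bin(reg & g2).count("1") % 2)  # modulo-2 convolution with g2
        state = reg >> 1                 # shift: drop the oldest bit
    return out

msg = [1, 0, 1, 1]
print(conv_encode(msg))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Enumerating `state` over the four possible register contents gives the state diagram; unrolling it over time gives the trellis.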