
IT2302- INFORMATION THEORY AND CODING

UNIT I
1. What is prefix coding?
Prefix coding is a variable-length coding algorithm. It assigns binary digits to the messages as per their probabilities of occurrence. A prefix of a codeword is any sequence which forms the initial part of the codeword. In a prefix code, no codeword is the prefix of any other codeword.
2. State the channel coding theorem for a discrete memoryless channel.
Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with capacity C, then if
R <= C
there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.
3. E"p*ai& #ha&&* #apa#it, th$r+.
%he channel capacity of the discrete memory less channel is given as maximum average mutual
information. %he maximi'ation is taen with respect to input probabilities P(x
i
)
$ * + log
,
("-./0) bits/sec
1ere + is channel bandwidth.
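The capacity formula above is easy to evaluate numerically. A short illustrative sketch (the 3.4 kHz / 30 dB values are the voice-grade telephone channel figures used later in Part B, chosen here only as an example):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Voice-grade telephone channel: B = 3.4 kHz, SNR = 30 dB
snr = 10 ** (30 / 10)            # convert dB to a linear power ratio
C = channel_capacity(3400, snr)  # roughly 33.9 kbits/sec
```

Note that the SNR must be converted from dB to a linear ratio before applying the formula.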
4. Define the channel capacity of the discrete memoryless channel.
The channel capacity of the discrete memoryless channel is given as the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi):
C = max I(X;Y) over all P(xi)
5. Define mutual information.
Mutual information is defined as the amount of information transferred when xi is transmitted and yi is received. It is represented by I(xi, yi) and given as,
I(xi, yi) = log2 [P(xi/yi) / P(xi)] bits
6. State the two properties of mutual information.
- The mutual information is symmetric: I(X;Y) = I(Y;X)
- The mutual information is always positive: I(X;Y) >= 0
7. Define efficiency of the source encoder.
Efficiency of the source encoder is given as,
η = Entropy (H) / Average number of bits per codeword (N)
8. Define code redundancy.
It is the measure of redundancy of bits in the encoded message sequence. It is given as,
Redundancy = 1 - code efficiency = 1 - η
It should be as low as possible.
9. Define rate of information transmission across the channel.
The rate of information transmission across the channel is given as,
Dt = [H(X) - H(X/Y)] r bits/sec
Here H(X) is the entropy of the source and H(X/Y) is the conditional entropy.
10. Define bandwidth efficiency.
The ratio of channel capacity to bandwidth is called bandwidth efficiency:
Bandwidth efficiency = channel capacity (C) / bandwidth (B)
11. What is the capacity of a channel having infinite bandwidth?
The capacity of such a channel is given as,
C = 1.44 (S/N0)
Here S/N0 is the ratio of signal power to noise power spectral density.
12. Define a discrete memoryless channel.
For discrete memoryless channels, the input and output are both discrete random variables. The current output depends only upon the current input for such a channel.
13. Find the entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively.
p1 = 1/5, p2 = 1/2, p3 = 1/3.
H = Σk pk log2(1/pk)
  = (1/5) log2 5 + (1/2) log2 2 + (1/3) log2 3
  = 1.49 bits/symbol
14. An alphabet set contains 3 letters A, B, C transmitted with probabilities of 1/3, 1/4, 1/4. Find the entropy.
p1 = 1/3, p2 = 1/4, p3 = 1/4.
H = Σk pk log2(1/pk)
  = (1/3) log2 3 + (1/4) log2 4 + (1/4) log2 4
  = 1.5283 bits/symbol
15. Define information.
Amount of information: Ik = log2(1/pk), where pk is the probability of the kth message.
16. Write the properties of information.
- If there is more uncertainty about the message, the information carried is also more.
- If the receiver knows the message being transmitted, the amount of information carried is zero.
- If I1 is the information carried by message m1 and I2 is the information carried by m2, then the total amount of information carried jointly by m1 and m2 is I1 + I2.
17. Calculate the amount of information if pk = 1/4.
Amount of information: Ik = log2(1/pk)
  = log10 4 / log10 2
  = 2 bits
18. What is entropy?
Average information is represented by entropy. It is denoted by H:
H = Σk pk log2(1/pk)
19. Properties of entropy:
- Entropy is zero if the event is sure or impossible: H = 0 if pk = 0 or 1.
- When pk = 1/M for all M symbols, the symbols are equally likely; for such a source the entropy is H = log2 M.
- The upper bound on entropy is Hmax = log2 M.
20. Define code variance.
Variance is the measure of variability in codeword lengths. It should be as small as possible.
σ² = Σk pk (nk - N)²
Here σ² = variance of the code
pk = probability of the kth symbol
nk = number of bits assigned to the kth symbol
N = average codeword length
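The definition above translates directly into a few lines of Python. A sketch; the four-symbol code used below is hypothetical, chosen only to exercise the formula:

```python
def avg_length(probs, lengths):
    """Average codeword length N = sum of p_k * n_k."""
    return sum(p * n for p, n in zip(probs, lengths))

def code_variance(probs, lengths):
    """Variance = sum of p_k * (n_k - N)^2."""
    N = avg_length(probs, lengths)
    return sum(p * (n - N) ** 2 for p, n in zip(probs, lengths))

# Hypothetical code: four symbols with codeword lengths 1, 2, 3, 3
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
N = avg_length(probs, lengths)        # 1.75 bits
var = code_variance(probs, lengths)   # 0.6875
```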
UNIT-II
1. Define Nyquist rate.
Let the signal be band limited to W Hz. Then the Nyquist rate is given as,
Nyquist rate = 2W samples/sec
Aliasing will not take place if the sampling rate is greater than the Nyquist rate.
2. What is meant by the aliasing effect?
The aliasing effect takes place when the sampling frequency is less than the Nyquist rate. Under such a condition, the spectrum of the sampled signal overlaps with itself. Hence higher frequencies take the form of lower frequencies. This interference of the frequency components is called the aliasing effect.
3. What is PWM referred to as?
PWM is basically pulse width modulation. The width of the pulse changes according to the amplitude of the modulating signal. It is also referred to as pulse duration modulation or PDM.
4. State the sampling theorem.
The sampling theorem states that:
(1) A band limited signal of finite energy, which has no frequency components higher than W Hz, is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds, and
(2) A band limited signal of finite energy, which has no frequency components higher than W Hz, may be completely recovered from the knowledge of its samples taken at the rate of 2W samples per second.
5. Mention two merits of DPCM.
- The bandwidth requirement of DPCM is less compared to PCM.
- The quantization error is reduced because of the prediction filter.
6. What is the main difference between DPCM and DM?
DM encodes the input sample by only one bit. It sends the information about +δ or -δ, i.e. a step rise or fall. DPCM can have more than one bit for encoding the sample; it sends the information about the difference between the actual sample value and the predicted sample value.
7. How can the message be recovered from PAM?
The message can be recovered from PAM by passing the PAM signal through a reconstruction filter. The reconstruction filter integrates the amplitudes of the PAM pulses. Amplitude smoothing of the reconstructed signal is done to remove amplitude discontinuities due to the pulses.
8. Write an expression for the bandwidth of binary PCM with N messages each with a maximum frequency of fm Hz.
If v bits are used to code each input sample, then the bandwidth of PCM is given as,
BT >= N v fm
Here v fm is the bandwidth required by one message.
9. How is a PDM wave converted into a PPM system?
The PDM signal is given as a clock signal to a monostable multivibrator. The multivibrator triggers on the falling edge. Hence a PPM pulse of fixed width is produced after the falling edge of the PDM pulse. PDM represents the input signal amplitude in the form of the width of the pulse, and a PPM pulse is produced at the end of this width. In other words, the position of the PPM pulse depends upon the input signal amplitude.
10. Mention the use of an adaptive quantizer in adaptive digital waveform coding schemes.
An adaptive quantizer changes its step size according to the variance of the input signal. Hence the quantization error is significantly reduced due to adaptive quantization. ADPCM uses adaptive quantization. The bit rate of such schemes is reduced due to adaptive quantization.
11. What do you understand by adaptive coding?
In adaptive coding, the quantization step size and prediction filter coefficients are changed as per the properties of the input signal. This reduces the quantization error and the number of bits used to represent the sample value. Adaptive coding is used for speech coding at low bit rates.
12. What is meant by quantization?
While converting the signal value from analog to digital, quantization is performed. The analog value is assigned to the nearest digital level. This is called quantization. The quantized value is then converted to an equivalent binary value. The quantization levels are fixed depending upon the number of bits. Quantization is performed in every analog-to-digital conversion.
13. The signal to quantization noise ratio in a PCM system depends on ___.
The signal to quantization noise ratio in PCM is given as,
(S/N)dB <= (4.8 + 6v) dB
Here v is the number of bits used to represent samples in PCM. Hence the signal to quantization noise ratio in PCM depends upon the number of bits, i.e. the number of quantization levels.
14. For the transmission of a normal speech signal, the PCM channel needs a bandwidth of ___.
Speech signals have a maximum frequency of 3.4 kHz. Normally 8-bit PCM is used for speech. The transmission bandwidth of PCM is given as,
BT >= v W
   = 8 x 3.4 kHz
   = 27.2 kHz
15. It is required to transmit speech over a PCM channel with 8-bit accuracy. Assume the speech is baseband limited to 3.6 kHz. Determine the bit rate.
The signaling rate in PCM is given as,
r = v fs
Here v is the number of bits, i.e. 8. The maximum signal frequency is W = 3.6 kHz, hence the minimum sampling frequency will be,
fs = 2W = 2 x 3.6 kHz = 7.2 kHz
r = 8 x 7.2 x 10^3 = 57.6 kbits/sec
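The PCM relations used in questions 13-15 can be bundled into a small sketch (illustrative only):

```python
def pcm_bit_rate(bits_per_sample, max_freq_hz):
    """Signaling rate r = v * fs, with fs = 2W (Nyquist-rate sampling)."""
    fs = 2 * max_freq_hz
    return bits_per_sample * fs

def pcm_sqnr_db(bits_per_sample):
    """Signal-to-quantization-noise ratio bound (S/N)dB = 4.8 + 6v."""
    return 4.8 + 6 * bits_per_sample

r = pcm_bit_rate(8, 3600)   # 57600 bits/sec, as in question 15
snr = pcm_sqnr_db(8)        # 52.8 dB
```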
16. What is meant by adaptive delta modulation?
In adaptive delta modulation, the step size is adjusted as per the slope of the input signal. The step size is made high if the slope of the input signal is high. This avoids slope overload distortion.
17. What is the advantage of delta modulation over pulse modulation schemes?
Delta modulation encodes one bit per sample. Hence the signaling rate is reduced in DM.
18. What should be the minimum bandwidth required to transmit a PCM channel?
The minimum transmission bandwidth in PCM is given as,
BT = v W
Here v is the number of bits used to represent one pulse and W is the maximum signal frequency.
19. What is the advantage of delta modulation over PCM?
Delta modulation uses one bit to encode one sample. Hence the bit rate of delta modulation is low compared to PCM.
20. How are distortions overcome in ADM?
- Slope overload and granular noise occur mainly because of the fixed step size in the delta modulator.
- In ADM the step size is varied according to the amplitude variations of the input signal: the step size is made larger for fast amplitude changes and smaller for slowly varying amplitudes.
UNIT III Error Control Coding
1. What is Hamming distance?
The Hamming distance between two code vectors is equal to the number of elements in which they differ. For example, let the two code words be,
X = (101) and Y = (110)
These two code words differ in the second and third bits. Therefore the Hamming distance between X and Y is two.
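The definition above is a one-liner in code. A sketch, using the same example:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length code vectors differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

d = hamming_distance((1, 0, 1), (1, 1, 0))  # differ in 2nd and 3rd bits -> 2
```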
2. Define code efficiency.
The code efficiency is the ratio of message bits in a block to the transmitted bits for that block by the encoder, i.e.
Code efficiency = message bits (k) / transmitted bits (n)
3. What is meant by systematic and nonsystematic codes?
In a systematic block code, the message bits appear first and then the check bits. In a nonsystematic code, the message and check bits cannot be identified in the code vector.
4. What is meant by a linear code?
A code is linear if the modulo-2 sum of any two code vectors produces another code vector. This means any code vector can be expressed as a linear combination of other code vectors.
5. What are the error detection and correction capabilities of Hamming codes?
The minimum distance (dmin) of Hamming codes is 3. Hence they can be used to detect double errors or correct single errors. Hamming codes are basically linear block codes with dmin = 3.
6. What is meant by a cyclic code?
Cyclic codes are a subclass of linear block codes. They have the property that a cyclic shift of one codeword produces another codeword. For example, consider the codeword,
X = (x_{n-1}, x_{n-2}, ..., x_1, x_0)
Let us shift the above code vector to the left cyclically,
X' = (x_{n-2}, x_{n-3}, ..., x_0, x_{n-1})
The above code vector is also a valid code vector.
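The cyclic-shift property can be checked mechanically. A sketch; the length-3 even-weight code below is a small cyclic code chosen for illustration:

```python
def cyclic_shift_left(codeword):
    """One left cyclic shift: (x_{n-1}, ..., x_1, x_0) -> (x_{n-2}, ..., x_0, x_{n-1})."""
    return codeword[1:] + codeword[:1]

# The length-3 even-weight code is cyclic: every shift of a codeword
# is again a codeword.
code = {(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}
closed = all(cyclic_shift_left(c) in code for c in code)  # True
```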
7. How is the syndrome calculated in Hamming codes and cyclic codes?
In Hamming codes the syndrome is calculated as,
S = Y H^T
Here Y is the received vector and H^T is the transpose of the parity check matrix. In cyclic codes, the syndrome vector polynomial is given as,
S(p) = rem [Y(p)/G(p)]
Here Y(p) is the received vector polynomial and G(p) is the generator polynomial.
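The matrix form S = Y H^T over GF(2) can be sketched directly. The (7,4) parity check matrix below is one possible choice, used here only as an assumption for illustration:

```python
def syndrome(y, H):
    """S = y H^T over GF(2); a zero syndrome means y is a valid codeword."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

# One possible (7,4) parity check matrix (an assumption for illustration)
H = [(1, 1, 0, 1, 1, 0, 0),
     (1, 0, 1, 1, 0, 1, 0),
     (0, 1, 1, 1, 0, 0, 1)]

s_valid = syndrome((0, 0, 1, 0, 0, 1, 1), H)  # (0, 0, 0): no error detected
s_error = syndrome((1, 0, 1, 0, 0, 1, 1), H)  # nonzero: first bit flipped
```

Note that the nonzero syndrome equals the column of H at the error position, which is what makes single-error correction possible.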
8. What is a BCH code?
BCH codes are among the most extensive and powerful error correcting cyclic codes. The decoding of BCH codes is comparatively simple. For any positive integers m and t, there exists a BCH code with the following parameters:
Block length: n = 2^m - 1
Number of parity check bits: n - k <= mt
Minimum distance: dmin >= 2t + 1
9. What is an RS code?
Reed-Solomon (RS) codes are nonbinary BCH codes. The encoder for RS codes operates on multiple bits simultaneously. The (n, k) RS code takes groups of m-bit symbols from the incoming binary data stream, k such symbols per block. The encoder then adds (n - k) redundant symbols to form the codeword of n symbols.
An RS code has:
Block length: n = 2^m - 1 symbols
Message size: k symbols
Number of parity check symbols: n - k = 2t
Minimum distance: dmin = 2t + 1
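The parameter relations above can be evaluated for concrete values. A sketch, using the widely cited RS(255, 223) configuration as an example:

```python
def rs_parameters(m, t):
    """Block length, message size and dmin of an RS code over m-bit symbols."""
    n = 2 ** m - 1        # block length in symbols
    k = n - 2 * t         # message symbols (n - k = 2t parity symbols)
    dmin = 2 * t + 1
    return n, k, dmin

# 8-bit symbols, correcting t = 16 symbol errors -> the RS(255, 223) code
n, k, dmin = rs_parameters(8, 16)
```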
10. What is the difference between block codes and convolutional codes?
Block codes take k message bits simultaneously and form an n-bit code vector, also called a block. A convolutional code takes one message bit at a time and generates two or more encoded bits. Thus convolutional codes generate a continuous string of encoded bits.
11. Define constraint length in convolutional codes.
Constraint length is the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits.
12. Define free distance and coding gain.
Free distance is the minimum distance between code vectors. It is also equal to the minimum weight of the code vectors.
Coding gain is used as a basis of comparison for different coding methods. For the same bit error rate, the coding gain is defined as,
A = [Eb/No] uncoded / [Eb/No] coded
13. Why are cyclic codes extremely well suited for error detection?
- They are easy to encode.
- They have a well-defined mathematical structure, therefore efficient decoding schemes are available.
14. What is a syndrome?
The syndrome gives an indication of the errors present in the received vector Y. If Y H^T = 0, then there are no errors in Y and it is a valid code vector. A nonzero value of Y H^T is called the syndrome. A nonzero value indicates that Y is not a valid code vector and contains errors.
15. Define dual code.
Let there be an (n, k) block code satisfying H G^T = 0. Then the (n, n-k), i.e. (n, q), block code is called the dual code. For every (n, k) block code, there exists a dual code of size (n, q).
16. Write the syndrome properties of linear block codes.
- The syndrome is obtained by S = Y H^T.
- If Y = X, then S = 0, i.e. no error in the output.
- If Y != X, then S != 0, i.e. there is an error in the output.
- The syndrome depends upon the error pattern only, i.e. S = E H^T.
17. What is a Hamming code? Write its conditions.
Hamming codes are (n, k) linear block codes with the following conditions:
- Number of check bits q >= 3
- Block length n = 2^q - 1
- Number of message bits k = n - q
- Minimum distance dmin = 3
18. List the properties of the generator polynomial of cyclic codes.
- The generator polynomial G(p) is a factor of (p^n + 1).
- The code polynomial, message polynomial and generator polynomial are related by X(p) = M(p) G(p).
- The generator polynomial is of degree q.
19. What is a Hadamard code?
The Hadamard code is derived from the Hadamard matrix. The Hadamard matrix is an n x n square matrix. The rows of this Hadamard matrix represent code vectors. Thus an n x n Hadamard matrix represents n code vectors of n bits each. If the block of the message vector contains k bits, then n = 2^k.
20. Write the advantage of an extended code.
An extended code can detect more errors compared to a normal (n, k) block code. But it cannot be used for error correction.
UNIT IV
1. State the main application of the Graphics Interchange Format (GIF).
The GIF format is used mainly on the internet to represent and compress graphical images. GIF images can be transmitted and stored over the network in interlaced mode; this is very useful when images are transmitted over low bit rate channels.
2. E"p*ai& R/&*&'th &#$%i&'.
%he runlength encoding is siplest lossless encoding techniques. It is mainly used to compress text
or digiti'ed documents. +inary data strings are better compressed by runlength encoding. $onsider
the binary data string
""""""77777"""""PPP.
If we apply runlength coding to abouve data string, we get,
D,"5 N,75 A,"5 B,7PP
%hus there are seven binary " s, followed by six binary 7 s followed by five binary " s and so on.
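The scheme above is straightforward to implement. A minimal sketch:

```python
def run_length_encode(bits):
    """Encode a binary string as a list of (run length, symbol) pairs."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                    # extend the current run
        runs.append((j - i, bits[i]))
        i = j
    return runs

runs = run_length_encode("111111100000011111000")
# [(7, '1'), (6, '0'), (5, '1'), (3, '0')]
```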
3. What is the JPEG standard?
JPEG stands for Joint Photographic Experts Group. This group has developed a standard for compression of monochrome/color still photographs and images. This compression standard is known as the JPEG standard. It is also known as ISO standard 10918. It provides compression ratios up to 100:1.
4. Why is differential encoding carried out only for the DC coefficient in JPEG?
The DC coefficient represents the average color/luminance/chrominance in the corresponding block. Therefore it is the largest coefficient in the block. Since only a very small physical area is covered by each block, the DC coefficients do not vary much from one block to the next; they vary slowly. Hence differential encoding is the best suited compression for DC coefficients: it encodes the difference between each pair of values rather than their absolute values.
5. What do you understand by "GIF interlaced mode"?
The image data can be stored and transferred over the network in an interlaced mode. The data is stored in such a way that the decompressed image is built up in a progressive way.
0. E"p*ai& i& 5ri! Hspatia* !r@/&#,I 1ith th ai% $! a %ia'ra+.
%he rate of change of pixel magnitude along the scanning line is called spatial frequency.
7. Write the advantages of data compression.
- A huge amount of data is generated in text, images, audio, speech and video.
- Because of compression, the transmission data rate is reduced.
- Storage requirements become less due to compression; due to video compression it is possible to store one complete movie on two CDs.
- Transportation of the data is easier due to compression.
8. Write the drawbacks of data compression.
- Due to lossy compression, some of the data is lost.
- Compression and decompression increase the complexity of the transmitter and receiver.
- Coding time is increased due to compression and decompression.
9. Compare lossless and lossy compression.
1. Lossless: no information is lost. Lossy: some information is lost.
2. Lossless: completely reversible. Lossy: not reversible.
3. Lossless: used for text and data. Lossy: used for speech and video.
4. Lossless: compression ratio is less. Lossy: high compression ratio.
5. Lossless: compression is independent of human response. Lossy: compression depends upon the sensitivity of the human ear, eyes, etc.
10. Compare static coding and dynamic coding.
1. Static: codewords are fixed throughout compression. Dynamic: codewords change dynamically during compression.
2. Static: statistical characteristics of the data are known. Dynamic: statistical characteristics of the data are not known.
3. Static: the receiver knows the set of codewords. Dynamic: the receiver dynamically calculates the codewords.
4. Example: static Huffman coding vs. dynamic Huffman coding.
11. Write the principle of static Huffman coding.
In static Huffman coding, the character string to be transmitted is analyzed and the frequency of occurrence of each character is determined. Variable-length codewords are then assigned to each character. The coding operation creates an unbalanced tree, also called the Huffman coding tree.
12. How is arithmetic coding advantageous over Huffman coding for text compression?
1. Huffman coding derives codes for individual characters; arithmetic coding encodes whole messages of short lengths.
2. Huffman coding achieves the Shannon rate only if the character probabilities are all integer powers of 1/2; arithmetic coding approaches the Shannon rate irrespective of the probabilities of the characters.
3. The precision of the computer does not affect Huffman coding; for arithmetic coding it determines the length of the character string that can be encoded.
4. Huffman coding is the simpler technique; arithmetic coding is more complicated.
13. Define compression.
Compression is the process of reducing the number of bits required to represent information. A large amount of data is generated in the form of text, images, audio, speech and video, and compression allows it to be stored and transmitted efficiently.
14. What is the principle of data compression?
Information source -> source encoder (compression) -> network -> source decoder (decompression) -> destination (receiver)
The source encoder compresses the information before transmission over the network, and the source decoder at the destination decompresses it to recover the information.
15. What are the types of compression?
Compression can be of two types: lossless compression and lossy compression.
Lossless compression: no part of the original information is lost during compression. Decompression produces the original information exactly.
Lossy compression: some information is lost during compression. Hence decompression does not produce the original information exactly.
16. What are make-up codes and termination codes in digitization of documents?
Make-up codes and termination codes give codewords for contiguous white and black pels along the scanned line.
Termination codes: these codes give codewords for black and white run-lengths from 0 to 63 in steps of 1 pel.
Make-up codes: these codes give codewords for black and white run-lengths that are multiples of 64 pels.
17. What are the JPEG standards?
JPEG stands for Joint Photographic Experts Group. This group worked on an international compression standard for colour and monochrome continuous-tone still images, photographs, etc. The group came up with a compression standard which is widely known as the JPEG standard. It is also known as ISO standard 10918.
18. What are the types of JPEG algorithms?
There are two types of JPEG algorithms.
Baseline JPEG: during decoding, this algorithm draws the image line by line until the complete image is shown.
Progressive JPEG: during decoding, this algorithm draws the whole image at once, but in very poor quality. Then another layer of data is added over the previous image to improve its quality. Progressive JPEG is used for images on the web; the user can make out the image before it is fully downloaded.
19. Draw the block diagram of the JPEG encoder.
Source image -> image preparation -> block preparation -> DCT -> quantization -> entropy encoding -> frame building -> encoded image data (JPEG)
20. What type of encoding technique is applied to the AC and DC coefficients in JPEG?
- The DC coefficients normally have large amplitudes and vary slowly from block to block. Differential encoding becomes very efficient for such data; it encodes only the differences among the coefficients.
- The AC coefficients are the remaining 63 coefficients in each block. They are fast varying, hence run-length encoding proves to be efficient for such data.
UNIT V
1. What is Dolby AC-1?
Dolby AC-1 is used for audio coding. It is an MPEG audio coding standard. It uses a psychoacoustic model at the encoder and has fixed bit allocations for each subband.
2. What is the need of the MIDI standard?
MIDI stands for Musical Instrument Digital Interface. It specifies the details of the digital interface between various musical instruments and a microcomputer. It is essential to access, record or store the music generated by musical instruments.
3. What is perceptual coding?
In perceptual coding, only the perceptual features of the sound are stored. This gives a high degree of compression. The human ear is not equally sensitive to all frequencies. Similarly, masking of a weaker signal takes place when a louder signal is present near it in frequency. These properties are exploited in perceptual coding.
-. E"p*ai& CEC> pri&#ip*s.
o
$8FP uses more sophisticated model of vocal tract.
o
.tandard audio segments are stored as waveform templates. 8ncoder and decoder both have
same set of templates. It is called codeboo.
o
8very digiti'ed segment is compared with waveform templates in code boo.
o
%he matching template is differentially encoded and transmitted.
:t the receiver, the differentially encoded codeword selects the matching template from codeboo.
5. What is the significance of D-frames in video coding?
- The D-frames are inserted at regular intervals in the encoded sequence of frames. These are highly compressed and they are ignored during decoding of P and B frames.
- The D-frames consist of only DC coefficients and hence they generate a low resolution picture.
- The low resolution pictures generated by D-frames are useful in fast-forward and rewind applications.
6. Define the terms "GOP" and "prediction span" with reference to video compression.
GOP (Group of Pictures): the number of frames or pictures between two successive I-frames is called a group of pictures, or GOP. The typical value of GOP varies from 3 to 12.
Prediction span: the number of frames between a P-frame and the immediately preceding I- or P-frame is called the prediction span. The typical value of the prediction span lies between 1 and 3.
7. Define the terms "processing delay" and "algorithmic delay" with respect to speech coders.
Processing delay: the combined time required for (i) analyzing each block of digitized samples at the encoder and (ii) reconstructing the speech at the decoder.
Algorithmic delay: the time required to accumulate each block of samples in the memory.
8. What do you understand by frequency masking?
A strong signal reduces the level of sensitivity of the human ear to other signals which are near to it in frequency. This effect is called frequency masking.
9. Find the average compression ratio of a GOP which has the frame sequence IBBPBBPBBPBB, where the individual compression ratios of I, P and B are 10:1, 20:1 and 50:1 respectively.
There are a total of 12 frames, of which 1 is an I-frame, 3 are P-frames and 8 are B-frames. Hence the average compression ratio will be,
Avg CR = [(1 x (1/10)) + (3 x (1/20)) + (8 x (1/50))] / 12
       = 0.0342
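The same calculation in code, as an illustrative sketch:

```python
def gop_avg_compression(sequence, ratios):
    """Average compressed size per frame, relative to an uncompressed frame."""
    return sum(1 / ratios[frame] for frame in sequence) / len(sequence)

ratios = {"I": 10, "P": 20, "B": 50}
avg = gop_avg_compression("IBBPBBPBBPBB", ratios)  # about 0.0342
```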
10. What is perceptual coding?
In perceptual coding, the limitations of the human ear are exploited. We know that the human ear can hear a very small sound when there is complete silence, but if other loud sounds are present, the ear cannot hear the very small sounds. These characteristics of the human ear are used in perceptual coding. A strong signal reduces the level of sensitivity of the ear to other signals which are near to it in frequency. This effect is called frequency masking.
11. What is code excited LPC?
Code excited LPC uses a more sophisticated model of the vocal tract, therefore the generated sound is more natural. This sophisticated version of the vocal tract model is known as the code excited linear prediction (CELP) model.
12. Define pitch and period.
Pitch: the pitch of the signal gives information about the fundamental frequency. The pitch of every person is different; however, it lies in one range for males and in another range for females.
Period: this is the time duration of the signal. It is also one of the important features.
13. List the applications of LPC.
- Since the generated sound is quite synthetic, LPC is used mainly for military purposes.
- LPC synthesis is used in applications which require very small bandwidth.
14. List the four international standards based on CELP.
They are the ITU-T recommendations G.728, G.729, G.729(A) and G.723.1.
15. What is meant by temporal masking?
When the ear hears a loud sound, a certain time has to pass before it can hear a quieter sound. This is called temporal masking.
16. What is MPEG?
MPEG stands for Motion Pictures Expert Group. It was formed by ISO. MPEG has developed the standards for compression of video with audio. MPEG audio coders are used for compression of audio. This compression mainly uses perceptual coding.
17. Draw the frame format of the MPEG audio encoder.
PCM audio signal -> analysis filter bank -> quantizers (one per subband) -> frame formatting -> encoded MPEG audio
A psychoacoustic model computes the signal-to-mask ratios and masking thresholds, which drive the bit allocations used by the quantizers.
18. Write the advantages and applications of Dolby AC-1.
Advantages:
- Simple encoding scheme due to fixed bit allocations.
- Reduced compressed bit rate, since frames do not include bit allocations. The typical compressed bit rate is 512 kbps for a two-channel stereo signal.
Applications:
- It is used in satellites for FM radio.
- It is also used for compression of sound associated with TV programs.
19. Write the advantages and disadvantages of Dolby AC-2.
Advantages:
- Bit allocations are not transmitted in the frame.
- The bit rate of the encoded audio is lower than MPEG audio coding.
Disadvantages:
- Complexity is more, since psychoacoustic model and spectral envelope encoders/decoders are used.
- Subband samples are encoded and transmitted in the frame; hence the bit rate of the compressed data is only slightly reduced.
- It cannot be used for broadcast applications, since the encoder and decoder both contain the psychoacoustic model; therefore the encoder cannot be modified easily.
20. Define I, P and B frames.
I-frame: also known as an intracoded frame. It is normally the first frame in a new scene.
P-frame: also known as a predictive frame. It basically predicts the movement of objects with respect to the I-frame.
B-frame: also known as a bidirectional frame. These frames relate the motion of the objects in preceding as well as succeeding frames.
PART-B (16 Marks)
UNIT I
". (i) 1ow will you calculate channel capacityT (,)
(ii)Grite channel coding theorem and channel capacity theorem (A)
(iii)$alculate the entropy for the given sample data :::+++$$< (B)
(iv)Prove .hannon information capacity theorem (N)
2. (i) Use differential entropy to compare the randomness of random variables.
(ii) Write the entropy for a binary symmetric source. (4)
(iii) Write down the channel capacity for a binary channel. (4)
3. (a) A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence are as described here:
Symbols:     X1   X2   X3   X4   X5
Probability: 0.2  0.2  0.1  0.1  0.4
Compute the Huffman code for this source. Also calculate the efficiency of the source encoder. (8)
(b) A voice grade channel of a telephone network has a bandwidth of 3.4 kHz. Calculate:
(i) the information capacity of the telephone channel for a signal-to-noise ratio of 30 dB, and
(ii) the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 9.6 kb/s. (8)
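For the five-symbol source in question 3, the average codeword length of a binary Huffman code can be found without building the full tree, because each merge of the two smallest probabilities adds one bit to every symbol beneath the merged node. A sketch of that shortcut (illustrative, not a substitute for constructing the code):

```python
import heapq

def huffman_avg_length(probs):
    """Average codeword length of a binary Huffman code.

    The average length equals the sum of all merged-node probabilities,
    since each merge adds one bit to every symbol under the new node.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)   # smallest remaining probability
        b = heapq.heappop(heap)   # second smallest
        total += a + b
        heapq.heappush(heap, a + b)
    return total

L = huffman_avg_length([0.2, 0.2, 0.1, 0.1, 0.4])  # 2.2 bits/symbol
```

Dividing the source entropy by this average length then gives the encoder efficiency asked for in part (a).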
4. A source has symbol probabilities 0.25, 0.25, 0.0625, 0.0625, 0.125, 0.125, 0.125.
(i) Compute the Huffman code for this source, moving a combined symbol as high as possible. (10)
(ii) Calculate the coding efficiency. (4)
(iii) Why does the computed source have an efficiency of 100%? (2)
5. (i) Consider the following binary sequence 111010011000101110100. Use the Lempel-Ziv algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the codebook. (12)
(ii) What are the advantages of the Lempel-Ziv encoding algorithm over Huffman coding? (4)
6. A discrete memoryless source has an alphabet of five symbols with their probabilities for its output as given here:
[X] = [x1 x2 x3 x4 x5]
P[X] = [0.45 0.15 0.15 0.10 0.15]
Compute two different Huffman codes for this source. For these two codes, find:
(i) the average codeword length
(ii) the variance of the average codeword length over the ensemble of source symbols. (16)
7. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.
(i) Construct a Shannon-Fano code for X and calculate the efficiency of the code. (7)
(ii) Repeat for the Huffman code and compare the results. (9)
8. Consider that two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X, Y) as shown in the matrix form:
P(X, Y) = [3/40  1/40  1/40
           1/20  3/20  1/20
           1/8   1/8   3/8 ]
Calculate the entropies H(X), H(Y), H(X/Y) and H(Y/X). (16)
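The entropies asked for in question 8 can be cross-checked numerically from the joint matrix, using H(X/Y) = H(X,Y) - H(Y) and H(Y/X) = H(X,Y) - H(X). An illustrative sketch:

```python
import math

def H(probs):
    """Entropy in bits, skipping zero-probability entries."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Joint probability matrix P(X, Y) from the question
P = [[3/40, 1/40, 1/40],
     [1/20, 3/20, 1/20],
     [1/8,  1/8,  3/8]]

px = [sum(row) for row in P]                             # marginal P(X)
py = [sum(P[i][j] for i in range(3)) for j in range(3)]  # marginal P(Y)
HX, HY = H(px), H(py)
HXY = H([p for row in P for p in row])                   # joint entropy H(X, Y)
H_X_given_Y = HXY - HY
H_Y_given_X = HXY - HX
```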
9. Apply the Huffman coding procedure to the following message ensemble and determine the average length of the encoded message. Also determine the coding efficiency. Use coding alphabet D = 4. There are 10 symbols.
X = [x1, x2, x3, ..., x10]
P[X] = [0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02] (16)
UNIT II
". (i)$ompare and contrast <P$M and :<P$M (N)
(ii) <efine pitch, period and loudness (N)
(iii) Ghat is decibelT (,)
(iv) Ghat is the purpose of <@%T (,)
2. (i) Explain delta modulation with examples. (6)
(ii) Explain sub-band adaptive differential pulse code modulation. (6)
(iii) What will happen if speech is coded at low bit rates? (4)
3. With a block diagram, explain the DPCM system. Compare DPCM with PCM and DM systems. (16)
4. (i) Explain DM systems with block diagram (8)
(ii) Consider a sine wave of frequency fm and amplitude Am, which is applied
to a delta modulator of step size δ. Show that slope overload distortion will
occur if
Am > δ / (2π fm Ts)
where Ts is the sampling period. What is the maximum power that may be
transmitted without slope overload distortion? (8)
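A numerical sanity check of the slope-overload bound in question 4 is below. The signal values (1 kHz tone, 64 kHz sampling, step size 0.1) are hypothetical, chosen only to illustrate; they are not given in the question:

```python
import math

# hypothetical example values (not from the question): 1 kHz tone,
# 64 kHz sampling, step size 0.1
fm, fs, delta = 1e3, 64e3, 0.1
Ts = 1 / fs

# The DM staircase can rise at most delta per sample (delta/Ts per second),
# while the sine Am*sin(2*pi*fm*t) has maximum slope 2*pi*fm*Am; equating
# the two gives the largest amplitude that avoids slope overload.
Am_max = delta / (2 * math.pi * fm * Ts)
P_max = Am_max ** 2 / 2          # maximum average sine power (into 1 ohm)
print("Am_max =", Am_max, "P_max =", P_max)
```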
5. Explain adaptive quantization and prediction with backward estimation in
ADPCM system with block diagram (16)
6. (i) Explain delta modulation systems with block diagrams (8)
(ii) What is slope overload distortion and granular noise and how is it
overcome in adaptive delta modulation? (8)
7. What is modulation? Explain how the adaptive delta modulator works
with different algorithms. Compare delta modulation with adaptive
delta modulation (16)
8. Explain pulse code modulation and differential pulse code modulation (16)
UNIT III
1. Consider a Hamming code C which is determined by the parity check
matrix
        1 1 0 1 1 0 0
H =     1 0 1 1 0 1 0
        0 1 1 1 0 0 1
(i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are code
words of C and calculate the Hamming distance between them (4)
(ii) Assume that a code word c was transmitted and that a vector r = c + e is
received. Show that the syndrome s = r·H^T only depends on the error vector e. (4)
(iii) Calculate the syndromes for all possible error vectors e with Hamming
weight ≤ 1 and list them in a table. How can this be used to correct a single
bit error in an arbitrary position? (4)
(iv) What is the length and the dimension k of the code? Why can the
minimum Hamming distance dmin not be larger than three? (4)
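Parts (i) and (iii) of question 1 can be verified mechanically. The sketch below checks both given vectors against H and builds the weight-1 syndrome table; the seven nonzero syndromes turn out to be the seven distinct columns of H, which is exactly why a single-bit error can be located:

```python
# Parity-check matrix H of the (7,4) Hamming code from question 1
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(r):
    """s = r * H^T over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, r)) % 2 for row in H)

C1 = [0, 0, 1, 0, 0, 1, 1]
C2 = [0, 0, 0, 1, 1, 1, 1]
assert syndrome(C1) == (0, 0, 0) and syndrome(C2) == (0, 0, 0)  # both are codewords

# Hamming distance between C1 and C2
d = sum(a != b for a, b in zip(C1, C2))
print("d(C1, C2) =", d)

# Syndromes of all error patterns of weight <= 1: the nonzero ones are the
# columns of H, all distinct, so matching s against a column of H locates
# and corrects a single-bit error in any position.
for i in range(7):
    e = [0] * 7
    e[i] = 1
    print("error in bit", i, "-> syndrome", syndrome(e))
```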
2. (i) Define linear block codes (2)
(ii) How to find the parity check matrix? (4)
(iii) Give the syndrome decoding algorithm (4)
(iv) Design a linear block code with dmin ≥ 3 for some block length n = 2^m - 1 (6)
3. (a) Consider the generation of a (7,4) cyclic code by generator
polynomial g(x) = 1 + x + x^3. Calculate the code word for the
message sequence 1001 and construct the systematic generator
matrix G. (8)
(b) Draw the diagram of encoder and syndrome calculator
generated by the polynomial g(x). (8)
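The division step behind systematic cyclic encoding in question 3(a) can be sketched as follows: append three zeros to the message (multiply m(x) by x^3), take the remainder modulo g(x), and prepend it as the parity bits:

```python
def gf2_remainder(dividend, divisor):
    """Remainder of polynomial division over GF(2); coefficient lists, lowest power first."""
    r = list(dividend)
    deg = len(divisor) - 1
    for i in range(len(r) - 1, deg - 1, -1):
        if r[i]:                           # cancel the leading term at position i
            for j, c in enumerate(divisor):
                r[i - deg + j] ^= c
    return r[:deg]

g = [1, 1, 0, 1]          # g(x) = 1 + x + x^3
m = [1, 0, 0, 1]          # message 1001, i.e. m(x) = 1 + x^3

parity = gf2_remainder([0, 0, 0] + m, g)   # remainder of x^3 * m(x) mod g(x)
codeword = parity + m                      # systematic: parity bits then message bits
print("parity:", parity, "codeword:", codeword)
```

Here the remainder works out to x + x^2, giving the systematic codeword 0111001.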
4. For the convolution encoder shown below, encode the message
sequence (10011). Also prepare the code tree for this encoder (16)
[Encoder diagram: message bits feed a two-stage shift register (FF FF);
two modulo-2 adders (Path 1 and Path 2) form the output]
5. (i) Find a (7,4) cyclic code to encode the message sequence (10111) using
the generator polynomial g(x) = 1 + x + x^3 (8)
(ii) Calculate the systematic generator matrix for the polynomial g(x) =
1 + x + x^3. Also draw the encoder diagram (8)
6. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for
generating a cyclic code for message [111] (16)
7. A convolution encoder is defined by the following generator
polynomials:
g0(x) = 1 + x + x^2 + x^3 + x^4
g1(x) = 1 + x + x^3 + x^4
g2(x) = 1 + x^2 + x^4
(i) What is the constraint length of this code? (4)
(ii) How many states are in the trellis diagram of this code? (8)
(iii) What is the code rate of this code? (4)
8. Construct a convolution encoder for the following specifications: rate
efficiency = 1/2, constraint length = 4. The connections from the shift
register to the modulo-2 adders are described by the following equations:
g1(x) = 1 + x, g2(x) = x
Determine the output codeword for the input message [1110] (16)
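A generic rate-1/n convolutional encoder can be sketched to check the answer to question 8. Note this sketch emits no tail (flush) bits, so it produces two output bits per input bit only:

```python
def conv_encode(bits, gens):
    """Rate-1/n convolutional encoder (no tail/flush bits); generator
    coefficient lists are lowest power first."""
    mem = max(len(g) for g in gens) - 1
    state = [0] * mem                 # shift-register contents, most recent input first
    out = []
    for b in bits:
        window = [b] + state          # current input followed by stored bits
        for g in gens:                # one modulo-2 adder per generator
            out.append(sum(c * w for c, w in zip(g, window)) % 2)
        state = window[:mem]
    return out

# generators from question 8: g1(x) = 1 + x, g2(x) = x
code = conv_encode([1, 1, 1, 0], [[1, 1], [0, 1]])
print(code)
```

For the input 1110 this traces out the output pairs 10, 01, 01, 11.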
UNIT IV
1. (i) Discuss the various stages in JPEG standard (9)
(ii) Differentiate lossless and lossy compression techniques and give one
example for each (4)
(iii) State the prefix property of Huffman code (3)
2. With the following symbols and probabilities of occurrence, encode the
message "went." using arithmetic coding algorithm. Compare arithmetic
coding with Huffman coding principles (16)
Symbols: e   n   t   w   .
Prob:    0.3 0.3 0.2 0.1 0.1
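The interval-narrowing steps of question 2 can be checked with a few lines of Python; each symbol shrinks the current interval to that symbol's cumulative slice:

```python
# symbol model from question 2, in the listed order
probs = [('e', 0.3), ('n', 0.3), ('t', 0.2), ('w', 0.1), ('.', 0.1)]

ranges, c = {}, 0.0
for sym, p in probs:
    ranges[sym] = (c, c + p)          # cumulative interval [low, high) per symbol
    c += p

low, high = 0.0, 1.0
for sym in "went.":
    span = high - low                 # narrow the interval to the symbol's slice
    low, high = low + span * ranges[sym][0], low + span * ranges[sym][1]
    print(sym, low, high)
# any number in the final interval [low, high) identifies the message
```

The final interval comes out to roughly [0.81602, 0.8162), so a single number such as 0.8161 encodes the whole message "went.".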
3. (a) Draw the JPEG encoder schematic and explain (10)
(b) Assuming a quantization threshold value of 16, derive the
resulting quantization error for each of the following DCT
coefficients:
127, 72, 64, 56, -56, -64, -72, -128. (6)
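One way to tabulate the errors asked for in question 3(b): divide each coefficient by the threshold, truncate toward zero, multiply back, and subtract. The truncation rule is an assumption here; some texts round to the nearest level instead, which changes the error values:

```python
# assumed quantizer: divide by the threshold and truncate toward zero,
# then multiply back (rounding to nearest is an equally common convention)
threshold = 16
coeffs = [127, 72, 64, 56, -56, -64, -72, -128]
for c in coeffs:
    dequant = int(c / threshold) * threshold
    print(c, dequant, c - dequant)    # coefficient, reconstructed value, error
```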
4. (i) Explain arithmetic coding with suitable example (12)
(ii) Compare arithmetic coding algorithm with Huffman coding (4)
5. (i) Draw JPEG encoder block diagram and explain each block (14)
(ii) Why are DC and AC coefficients encoded separately in JPEG? (2)
6. (a) Discuss in brief the principles of compression (12)
(b) In the context of compression for text, image, audio and video, which of
the compression techniques discussed above are suitable and why? (4)
7. (i) Investigate the block preparation and quantization phases of the
JPEG compression process with diagrams wherever necessary (8)
(ii) Elucidate on the GIF and TIFF image compression formats (8)
UNIT V
1. (i) Explain the principles of perceptual coding (14)
(ii) Why is LPC not suitable to encode music signal? (2)
2. (i) Explain the encoding procedure of I, P and B frames in video encoding
with suitable diagrams (14)
(ii) What are the special features of MPEG-4 standards? (2)
3. Explain the Linear Predictive Coding (LPC) model of analysis and synthesis of
speech signal. State the advantages of coding speech signal at low bit rates (16)
4. Explain the encoding procedure of I, P and B frames in video compression
techniques. State the intended application of the following video coding
standards: MPEG-1, MPEG-2, MPEG-3, MPEG-4 (16)
5. (i) What are macroblocks and GOBs? (4)
(ii) On what factors does the quantization threshold depend in H.261 standards?
(3)
(iii) Discuss the MPEG compression techniques (9)
6. (i) Discuss about the various Dolby audio coders (8)
(ii) Discuss about any two audio coding techniques used in MPEG (8)
7. Discuss in brief, the following audio coders:
MPEG audio coders (8)
DOLBY audio coders (8)
8. (i) Explain the Motion Estimation and Motion Compensation phases of P and
B frame encoding process with diagrams wherever necessary (12)
(ii) Write a short note on the Macro Block format of H.261
compression standard (4)