IT2302 - INFORMATION THEORY AND CODING


UNIT I



1. A.1. What is prefix coding?

Prefix coding is a variable-length coding algorithm. It assigns binary digits to the messages as per their probabilities of occurrence. A prefix of a codeword means any sequence which is an initial part of the codeword. In a prefix code, no codeword is the prefix of any other codeword.


1. A.2. State the channel coding theorem for a discrete memoryless channel.

Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with capacity C: if

R ≤ C

then there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message which may be made arbitrarily small.


1. A.3. Explain the channel capacity theorem.

The channel capacity of a discrete memoryless channel is given as the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi). For a band-limited channel with Gaussian noise,

C = B log2(1 + S/N) bits/sec

Here B is the channel bandwidth and S/N is the signal-to-noise ratio.
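A quick numerical sketch of this formula in Python (the bandwidth and SNR values below are hypothetical, chosen only for illustration):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example: a 3.1 kHz telephone channel with 30 dB SNR
snr = 10 ** (30 / 10)               # convert dB to a linear power ratio
print(shannon_capacity(3100, snr))  # ~30.9 kbit/s
```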


1. A.4. Define the channel capacity of the discrete memoryless channel.

The channel capacity of the discrete memoryless channel is given as the maximum average mutual information, where the maximization is taken with respect to the input probabilities P(xi):

C = max I(X;Y), the maximum being taken over P(xi)


1. A.5. Define mutual information.

The mutual information is defined as the amount of information transferred when xi is transmitted and yi is received. It is represented by I(xi, yi) and given as

I(xi, yi) = log [ P(xi | yi) / P(xi) ] bits


1. A.6. State two properties of mutual information.

The mutual information is symmetric: I(X;Y) = I(Y;X).

The mutual information is always non-negative: I(X;Y) ≥ 0.


1. A.7. Define the efficiency of the source encoder.

Efficiency of the source encoder is given as

η = Entropy (H) / Avg. no. of bits in codeword (N)






1. A.8. Define code redundancy.

It is the measure of redundancy of bits in the encoded message sequence. It is given as

Redundancy = 1 - code efficiency = 1 - η

It should be as low as possible.


1. A.9. Define the rate of information transmission across the channel.

The rate of information transmission across the channel is given as

Dt = [H(X) - H(X/Y)] r bits/sec

Here H(X) is the entropy of the source and H(X/Y) is the conditional entropy.


1. A.10. Define bandwidth efficiency.

The ratio of channel capacity to bandwidth is called bandwidth efficiency:

Bandwidth efficiency = channel capacity (C) / bandwidth (B)


1. A.11. What is the capacity of a channel having infinite bandwidth?

The capacity of such a channel is given as

C = 1.44 (S/N0)

Here S is the signal power and N0 is the noise power spectral density.


1. A.12. Define a discrete memoryless channel.

For a discrete memoryless channel, the input and output are both discrete random variables. The current output depends only upon the current input for such a channel.

1. A.13. Find the entropy of a source emitting symbols x, y, z with probabilities of 1/5, 1/2, 1/3 respectively.

p1 = 1/5, p2 = 1/2, p3 = 1/3

H = Σk pk log2(1/pk)
  = (1/5) log2 5 + (1/2) log2 2 + (1/3) log2 3
  ≈ 1.493 bits/symbol
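A small Python sketch of this computation (note that the probabilities as given in the question sum to slightly more than 1; the formula is applied to them exactly as stated):

```python
import math

def entropy(probs):
    """H = sum(p * log2(1/p)) in bits/symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([1/5, 1/2, 1/3]))  # ~1.493, as above
print(entropy([1/3, 1/4, 1/4]))  # ~1.528, question 1.A.14 below
```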


1. A.14. An alphabet set contains 3 letters A, B, C transmitted with probabilities of 1/3, 1/4, 1/4. Find the entropy.

p1 = 1/3, p2 = 1/4, p3 = 1/4

H = Σk pk log2(1/pk)
  = (1/3) log2 3 + (1/4) log2 4 + (1/4) log2 4
  = 1.52832 bits/symbol


1. A.15. Define information.

Amount of information:

Ik = log2(1/pk)








1. A.16. Write the properties of information.

If there is more uncertainty about the message, the information carried is also more.

If the receiver knows the message being transmitted, the amount of information carried is zero.

If I1 is the information carried by message m1, and I2 is the information carried by m2, then the amount of information carried jointly due to m1 and m2 is I1 + I2.


1. A.17. Calculate the amount of information if pk = 1/4.

Amount of information: Ik = log2(1/pk) = log10 4 / log10 2 = 2 bits


1. A.18. What is entropy?

Average information is represented by entropy. It is represented by H:

H = Σk pk log2(1/pk)


1. A.19. Properties of entropy:

Entropy is zero if the event is sure or impossible: H = 0 if pk = 0 or 1.

When pk = 1/M for all M symbols, the symbols are equally likely; for such a source the entropy is given as H = log2 M.

The upper bound on entropy is given as Hmax = log2 M.


1. A.20. Define code variance.

Variance is the measure of variability in codeword lengths. It should be as small as possible:

σ² = Σk pk (nk - N)²

Here σ² is the code variance, pk the probability of the kth symbol, nk the number of bits assigned to the kth symbol, and N the average codeword length.
















UNIT II


2. A.1. Define Nyquist rate.

Let the signal be band-limited to W Hz. Then the Nyquist rate is given as

Nyquist rate = 2W samples/sec

Aliasing will not take place if the sampling rate is greater than the Nyquist rate.


2. A.2. What is meant by aliasing effect?

The aliasing effect takes place when the sampling frequency is less than the Nyquist rate. Under such a condition, the spectrum of the sampled signal overlaps with itself; hence higher frequencies take the form of lower frequencies. This interference of the frequency components is called the aliasing effect.


2. A.3. What does PWM refer to?

PWM is basically pulse width modulation: the width of the pulse changes according to the amplitude of the modulating signal. It is also referred to as pulse duration modulation or PDM.



2. A.4. State the sampling theorem.

The sampling theorem states that a band-limited signal of finite energy, which has no frequency components higher than W Hz, is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds, and that

a band-limited signal of finite energy, which has no frequency components higher than W Hz, may be completely recovered from the knowledge of its samples taken at the rate of 2W samples per second.



2. A.5. Mention two merits of DPCM.

The bandwidth requirement of DPCM is less compared to PCM.

The quantization error is reduced because of the prediction filter.



2. A.6. What is the main difference between DPCM and DM?

DM encodes the input sample by only one bit: it sends the information about +Δ or -Δ, i.e., a step rise or fall. DPCM can have more than one bit for encoding the sample; it sends the information about the difference between the actual sample value and the predicted sample value.


2. A.7. How can the message be recovered from PAM?

The message can be recovered from PAM by passing the PAM signal through a reconstruction filter. The reconstruction filter integrates the amplitudes of the PAM pulses. Amplitude smoothing of the reconstructed signal is done to remove amplitude discontinuities due to the pulses.


2. A.8. Write an expression for the bandwidth of binary PCM with N messages, each with a maximum frequency of fm Hz.

If v bits are used to code each input sample, then the bandwidth of PCM is given as

BT ≥ N · v · fm

Here v · fm is the bandwidth required by one message.


2. A.9. How is a PDM wave converted into a PPM signal?

The PDM signal is given as a clock signal to a monostable multivibrator. The multivibrator triggers on the falling edge; hence a PPM pulse of fixed width is produced after the falling edge of the PDM pulse. PDM represents the input signal amplitude in the form of the width of the pulse, and a PPM pulse is produced after this width. In other words, the position of the PPM pulse depends upon the input signal amplitude.


2. A.10. Mention the use of an adaptive quantizer in adaptive digital waveform coding schemes.

An adaptive quantizer changes its step size according to the variance of the input signal. Hence the quantization error is significantly reduced due to adaptive quantization. ADPCM uses adaptive quantization. The bit rate of such schemes is also reduced due to adaptive quantization.


2. A.11. What do you understand by adaptive coding?

In adaptive coding, the quantization step size and the prediction filter coefficients are changed as per the properties of the input signal. This reduces the quantization error and the number of bits used to represent the sample value. Adaptive coding is used for speech coding at low bit rates.


2. A.12. What is meant by quantization?

While converting a signal value from analog to digital, quantization is performed. The analog value is assigned to the nearest digital level; this is called quantization. The quantized value is then converted to an equivalent binary value. The quantization levels are fixed depending upon the number of bits. Quantization is performed in every analog-to-digital conversion.
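A minimal sketch of a uniform quantizer in Python (assuming a mid-rise quantizer over the range [-1, 1]; real converters differ in range and rounding details):

```python
def quantize(x, n_bits, x_min=-1.0, x_max=1.0):
    """Uniform mid-rise quantizer: map x to the nearest of 2**n_bits levels."""
    levels = 2 ** n_bits
    step = (x_max - x_min) / levels
    index = min(levels - 1, max(0, int((x - x_min) / step)))
    return x_min + (index + 0.5) * step   # reconstruction level (cell midpoint)

print(quantize(0.23, n_bits=3))  # 0.125: one of the 8 levels between -1 and 1
```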

2. A.13. The signal-to-quantization-noise ratio in a PCM system depends on ...

The signal-to-quantization-noise ratio in PCM is given as

(S/N)dB ≤ (4.8 + 6v) dB

Here v is the number of bits used to represent samples in PCM. Hence the signal-to-quantization-noise ratio in PCM depends upon the number of bits, i.e., the number of quantization levels.


2. A.14. The transmission of a normal speech signal in the PCM channel needs a bandwidth of ...

Speech signals have a maximum frequency of 3.4 kHz. Normally 8-bit PCM is used for speech. The transmission bandwidth of PCM is given as

BT ≥ vW ≥ 8 × 3.4 kHz ≥ 27.2 kHz


2. A.15. It is required to transmit speech over a PCM channel with 8-bit accuracy. Assume the speech is baseband limited to 3.6 kHz. Determine the bit rate.

The signaling rate in PCM is given as

R = v fs

Here v is the number of bits, i.e., 8. The maximum signal frequency is W = 3.6 kHz, hence the minimum sampling frequency will be

fs = 2W = 2 × 3.6 kHz = 7.2 kHz

R = 8 × 7.2 × 10³ = 57.6 kbits/sec
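The same calculation as a small Python sketch:

```python
def pcm_bit_rate(bits_per_sample, max_freq_hz):
    """Minimum PCM bit rate R = v * fs, with fs = 2W (Nyquist-rate sampling)."""
    fs = 2 * max_freq_hz
    return bits_per_sample * fs

print(pcm_bit_rate(8, 3600))  # 57600 bits/sec, matching the worked answer
```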


2. A.16. What is meant by adaptive delta modulation?


In adaptive delta modulation, the step size is adjusted as per the slope of the input signal. Step
size is made high if slope of the input signal is high. This avoids slope overload distortion.


2. A.17. What is the advantage of delta modulation over pulse modulation schemes?

Delta modulation encodes one bit per sample. Hence the signaling rate is reduced in DM.


2. A.18. What should be the minimum bandwidth required to transmit a PCM channel?

The minimum transmission bandwidth in PCM is given as

BT = vW

Here v is the number of bits used to represent one pulse and W is the maximum signal frequency.


2. A.19. What is the advantage of delta modulation over PCM?

Delta modulation uses one bit to encode one sample. Hence the bit rate of delta modulation is low compared to PCM.


2. A.20. How are distortions overcome in ADM?

Slope overload and granular noise occur mainly because of the fixed step size in the delta modulator.

In ADM the step size is varied according to the amplitude variations of the input signal: the step size is more for fast amplitude changes and less for slowly varying amplitudes.





















UNIT III Error Control Coding

3. A.1. What is Hamming distance?

The Hamming distance between two code vectors is equal to the number of elements in which they differ. For example, let the two code words be

X = (101) and Y = (110)

These two code words differ in the second and third bits. Therefore the Hamming distance between X and Y is two.
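A one-line Python sketch of this definition:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("101", "110"))  # 2, as in the example above
```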


3. A.2. Define code efficiency.

The code efficiency is the ratio of message bits in a block to the transmitted bits for that block by the encoder, i.e.,

Code efficiency = message bits / transmitted bits = k / n


3. A.3. What is meant by systematic and nonsystematic codes?

In a systematic block code, the message bits appear first and then the check bits. In a nonsystematic code, message and check bits cannot be identified in the code vector.


3. A.4. What is meant by a linear code?

A code is linear if the modulo-2 sum of any two code vectors produces another code vector. This means any code vector can be expressed as a linear combination of other code vectors.


3. A.5. What are the error detection and correction capabilities of Hamming codes?

The minimum distance (dmin) of Hamming codes is 3. Hence they can be used to detect double errors or correct single errors. Hamming codes are basically linear block codes with dmin = 3.


3. A.6. What is meant by a cyclic code?

Cyclic codes are a subclass of linear block codes. They have the property that a cyclic shift of one codeword produces another code word. For example, consider the codeword

X = (xn-1, xn-2, ..., x1, x0)

Shifting the above code vector cyclically to the left gives

X' = (xn-2, xn-3, ..., x0, xn-1)

The above code vector is also a valid code vector.

3. A.7. How is the syndrome calculated in Hamming codes and cyclic codes?

In Hamming codes the syndrome is calculated as

S = Y H^T

Here Y is the received vector and H^T is the transpose of the parity check matrix.

In cyclic codes, the syndrome polynomial is given as

S(p) = rem [Y(p) / G(p)]

Here Y(p) is the received vector polynomial and G(p) is the generator polynomial.
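A minimal sketch of the matrix form of the syndrome in Python/NumPy, using the parity check matrix of the (7,4) Hamming code that appears in question 3.B.1 below:

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(y, H):
    """S = Y . H^T over GF(2); an all-zero syndrome means no detectable error."""
    return (np.asarray(y) @ H.T) % 2

print(syndrome([0, 0, 1, 0, 0, 1, 1], H))  # [0 0 0] -> a valid codeword
```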








3. A.8. What is a BCH code?

BCH codes are the most extensive and powerful error-correcting cyclic codes. The decoding of BCH codes is comparatively simple. For any positive integers m and t, there exists a BCH code with the following parameters:

Block length: n = 2^m - 1
Number of parity check bits: n - k ≤ mt
Minimum distance: dmin ≥ 2t + 1



3. A.9. What is an RS code?

These are non-binary BCH codes. The encoder for RS codes operates on multiple bits simultaneously. The (n,k) RS code takes groups of m-bit symbols of the incoming binary data stream. It takes k such symbols in one block; then the encoder adds (n - k) redundant symbols to form the code word of n symbols.

An RS code has:

Block length: n = 2^m - 1 symbols
Message size: k symbols
Number of parity check symbols: n - k = 2t
Minimum distance: dmin = 2t + 1



3. A.10. What is the difference between block codes and convolutional codes?

Block codes take k message bits simultaneously and form an n-bit code vector, which is also called a block. A convolutional code takes one message bit at a time and generates two or more encoded bits. Thus convolutional codes generate a string.


3. A.11. Define constraint length in convolutional codes.

Constraint length is the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits.



3. A.12. Define free distance and coding gain.

Free distance is the minimum distance between code vectors. It is also equal to the minimum weight of the code vectors.

Coding gain is used as a basis of comparison for different coding methods. To achieve the same bit error rate, the coding gain is defined as

A = [Eb/N0]uncoded / [Eb/N0]coded



3. A.13. Why are cyclic codes extremely well suited for error detection?

They are easy to encode.

They have a well-defined mathematical structure, therefore efficient decoding schemes are available.



3. A.14. What is a syndrome?

The syndrome gives an indication of the errors present in the received vector Y. If Y H^T = 0, then there are no errors in Y and it is a valid code vector. The nonzero value of Y H^T is called the syndrome. A nonzero value indicates that Y is not a valid code vector and contains errors.






3. A.15. Define dual code.

Let there be an (n,k) block code that satisfies H G^T = 0. Then the (n, n-k), i.e., (n,q), block code is called the dual code. For every (n,k) block code, there exists a dual code of size (n,q).



3. A.16. Write the syndrome properties of linear block codes.

The syndrome is obtained by S = Y H^T.

If Y = X, then S = 0, i.e., there is no error in the output.

If Y ≠ X, then S ≠ 0, i.e., there is an error in the output.

The syndrome depends upon the error pattern only, i.e., S = E H^T.





3. A.17. What is a Hamming code? Write its conditions.

Hamming codes are (n,k) linear block codes with the following conditions:

Number of check bits: q ≥ 3
Block length: n = 2^q - 1
Number of message bits: k = n - q
Minimum distance: dmin = 3



3. A.18. List the properties of the generator polynomial of cyclic codes.

The generator polynomial G(p) is a factor of (p^n + 1).

The code polynomial, message polynomial and generator polynomial are related by X(p) = M(p) G(p).

The generator polynomial is of degree q.





3. A.19. What is a Hadamard code?

The Hadamard code is derived from the Hadamard matrix. The Hadamard matrix is an n × n square matrix whose rows represent code vectors. Thus an n × n Hadamard matrix represents n code vectors of n bits each. If the block of the message vector contains k bits, then

n = 2^k


3. A.20. Write the advantage of extended codes.

An extended code can detect more errors compared to the normal (n,k) block code, but it cannot be used for error correction.






UNIT IV


4. A.1. State the main application of the Graphics Interchange Format (GIF).

The GIF format is used mainly on the internet to represent and compress graphical images. GIF images can be transmitted and stored over the network in interlaced mode; this is very useful when images are transmitted over low bit rate channels.



4. A.2. Explain run-length encoding.

Run-length encoding is the simplest lossless encoding technique. It is mainly used to compress text or digitized documents. Binary data strings are better compressed by run-length encoding. Consider the binary data string

111111100000011111000...

If we apply run-length coding to the above data string, we get

7,1; 6,0; 5,1; 3,0; ...

Thus there are seven binary 1s, followed by six binary 0s, followed by five binary 1s, and so on.
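A short Python sketch of this encoding, emitting (run length, bit) pairs:

```python
from itertools import groupby

def run_length_encode(bits):
    """Encode a binary string as (run_length, bit_value) pairs."""
    return [(len(list(group)), bit) for bit, group in groupby(bits)]

print(run_length_encode("111111100000011111000"))
# [(7, '1'), (6, '0'), (5, '1'), (3, '0')]
```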


4. A.3. What is the JPEG standard?

JPEG stands for Joint Photographic Experts Group. This group has developed a standard for compression of monochrome/color still photographs and images. This compression standard is known as the JPEG standard. It is also known as ISO standard 10918. It provides compression ratios up to 100:1.



4. A.4. Why is differential encoding carried out only for the DC coefficient in JPEG?

The DC coefficient represents the average color/luminance/chrominance in the corresponding block. Therefore it is the largest coefficient in the block.

A very small physical area is covered by each block. Hence the DC coefficients do not vary much from one block to the next block.

Since the DC coefficient varies slowly, differential encoding is the best suited compression for DC coefficients: it encodes the difference between each pair of values rather than their absolute values.



4. A.5. What do you understand by "GIF interlaced mode"?

The image data can be stored and transferred over the network in an interlaced mode. The data is stored in such a way that the decompressed image is built up in a progressive way.



4. A.6. Explain in brief "spatial frequency" with the aid of a diagram.

The rate of change of pixel magnitude along the scanning line is called spatial frequency.



4. A.7. Write the advantages of data compression.

Huge amounts of data are generated in text, images, audio, speech and video.

Because of compression, the transmission data rate is reduced.

Storage requirements become less due to compression. Due to video compression, it is possible to store one complete movie on two CDs.

Transportation of the data is easier due to compression.



4. A.8. Write the drawbacks of data compression.

Due to (lossy) compression, some of the data is lost.

Compression and decompression increase the complexity of the transmitter and receiver.

Coding time is increased due to compression and decompression.



4. A.9. Compare lossless and lossy compression.

S.No. | Lossless compression | Lossy compression
1. | No information is lost | Some information is lost
2. | Completely reversible | It is not reversible
3. | Used for text and data | Used for speech and video
4. | Compression ratio is less | High compression ratio
5. | Compression is independent of human response | Compression depends upon the sensitivity of the human ear, eyes, etc.

4. A.10. Compare static coding and dynamic coding.

S.No. | Static coding | Dynamic coding
1. | Codewords are fixed throughout compression | Codewords change dynamically during compression
2. | Statistical characteristics of the data are known | Statistical characteristics of the data are not known
3. | Receiver knows the set of codewords | Receiver dynamically calculates the codewords
4. | Ex: static Huffman coding | Ex: dynamic Huffman coding



4. A.11. Write the principle of static Huffman coding.

In static Huffman coding, the character string to be transmitted is analyzed and the frequency of occurrence of each character is determined. Variable-length codewords are then assigned to each character. The coding operation creates an unbalanced tree, also called the Huffman coding tree.
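A minimal sketch of Huffman code construction in Python (the symbol probabilities are hypothetical, and tie-breaking choices mean other equally valid code tables exist):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table from a {symbol: probability} mapping."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # the two least probable subtrees
        f2, i, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

print(huffman_codes({"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.15}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```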


4. A.12. How is arithmetic coding advantageous over Huffman coding for text compression?

S.No. | Huffman coding | Arithmetic coding
1. | Codes for the characters are derived | Coding is done for messages of short lengths
2. | Shannon's rate is achieved only if character probabilities are all integer powers of 1/2 | Shannon's rate is achieved irrespective of the probabilities of the characters
3. | Precision of the computer does not affect coding | Precision of the computer determines the length of the character string that can be encoded
4. | Huffman coding is the simpler technique | Arithmetic coding is complicated


4. A.13. Define compression.

Large amounts of data are generated in the form of text, images, audio, speech and video; compression represents this information with a reduced number of bits for storage or transmission.


4. A.14. What is the principle of data compression?

[Block diagram: information source → source encoder (compression) → network → destination decoder (decompression) → receiver]


4. A.15. What are the types of compression?

Compression can be of two types: lossless compression and lossy compression.

Lossless compression: no part of the original information is lost during compression. Decompression produces the original information exactly.

Lossy compression: some information is lost during compression. Hence decompression does not produce the original information exactly.



4. A.16. What are "make-up codes" and "termination codes" in digitization of documents?

Make-up codes and termination codes give codewords for contiguous white and black pels along the scanned line.

Termination codes: these codes give codewords for black and white run-lengths from 0 to 63 in steps of 1 pel.

Make-up codes: these codes give codewords for black and white run-lengths that are multiples of 64 pels.



4. A.17. What are JPEG standards?

JPEG stands for Joint Photographic Experts Group. This group worked on an international compression standard for colour and monochrome continuous-tone still images, photographs etc. The group came up with a compression standard which is widely known as the JPEG standard. It is also known as ISO standard 10918.



4. A.18. What are the types of JPEG algorithms?

There are two types of JPEG algorithms.

Baseline JPEG: during decoding, this algorithm draws lines until the complete image is shown.

Progressive JPEG: during decoding, this JPEG algorithm draws the whole image at once, but in very poor quality; then another layer of data is added over the previous image to improve its quality. Progressive JPEG is used for images on the web: the user can make out the image before it is fully downloaded.


4. A.19. Draw the block diagram of the JPEG encoder.

[Block diagram: source image → block and image preparation → DCT → quantization → entropy encoding → frame building → encoded image data (JPEG)]

4. A.20. What type of encoding techniques are applied to the AC and DC coefficients in JPEG?

The DC coefficients normally have large amplitudes. They vary slowly from block to block; differential encoding becomes very efficient for such data, as it encodes only the difference among the coefficients.

The AC coefficients are the remaining 63 coefficients in each block. They are fast varying; hence run-length encoding proves to be efficient for such data.































UNIT V


5. A.1. What is Dolby AC-1?

Dolby AC-1 is used for audio coding. It uses a psychoacoustic model at the encoder and has fixed bit allocations for each subband.


5. A.2. What is the need for the MIDI standard?

MIDI stands for Musical Instrument Digital Interface. It specifies the details of the digital interface of various musical instruments to a micro-computer. It is essential to access, record or store the music generated from musical instruments.



5. A.3. What is perceptual coding?

In perceptual coding only the perceptual features of the sound are stored. This gives a high degree of compression. The human ear is not equally sensitive to all frequencies. Similarly, masking of a weaker signal takes place when a louder signal is present nearby. These parameters are used in perceptual coding.



5. A.4. Explain CELP principles.

CELP uses a more sophisticated model of the vocal tract.

Standard audio segments are stored as waveform templates. The encoder and decoder both have the same set of templates, called the codebook.

Every digitized segment is compared with the waveform templates in the codebook.

The matching template is differentially encoded and transmitted.

At the receiver, the differentially encoded codeword selects the matching template from the codebook.



5. A.5. What is the significance of D-frames in video coding?

D-frames are inserted at regular intervals in the encoded sequence of frames. They are highly compressed, and the P- and B-frames are ignored while decoding them.

D-frames consist of only DC coefficients and hence generate a low resolution picture.

The low resolution pictures generated by D-frames are useful in fast-forward and rewind applications.



5. A.6. Define the terms "GOP" and "prediction span" with reference to video compression.

GOP (Group of Pictures): the number of frames or pictures between two successive I-frames is called a group of pictures, or GOP. The typical value of GOP varies from 3 to 12.

Prediction span: the number of frames between a P-frame and the immediately preceding I- or P-frame is called the prediction span. The typical value of the prediction span lies between 1 and 3.



5. A.7. Define the terms "processing delay" and "algorithmic delay" with respect to speech coders.

Processing delay: the combined time required for (i) analyzing each block of digitized samples at the encoder and (ii) reconstruction of the speech at the decoder.

Algorithmic delay: the time required to accumulate each block of samples in the memory.




5. A.8. What do you understand by frequency masking?

A strong signal reduces the level of sensitivity of the human ear to other signals which are near to it in frequency. This effect is called frequency masking.



5. A.9. Find the average compression ratio of the GOP which has the frame sequence IBBPBBPBBPBB, where the individual compression ratios of I, P and B are 10:1, 20:1 and 50:1 respectively.

There are 12 frames in total, of which 1 is an I-frame, 3 are P-frames and 8 are B-frames. Hence the average compression ratio will be

Avg CR = [(1 × 1/10) + (3 × 1/20) + (8 × 1/50)] / 12 = 0.0342
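The same arithmetic as a small Python sketch:

```python
def average_compression_ratio(frame_counts, ratios):
    """Average compressed size per frame, as a fraction of the original size."""
    total = sum(frame_counts.values())
    return sum(n / ratios[t] for t, n in frame_counts.items()) / total

print(average_compression_ratio({"I": 1, "P": 3, "B": 8},
                                {"I": 10, "P": 20, "B": 50}))  # ~0.0342
```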



5. A.10. What is perceptual coding?

In perceptual coding, the limitations of the human ear are exploited. The human ear can hear a very small sound when there is complete silence, but if other loud sounds are present, it cannot hear the very small sounds. These characteristics of the human ear are used in perceptual coding. A strong signal reduces the level of sensitivity of the ear to other signals which are near to it in frequency; this effect is called frequency masking.



5. A.11. What is code-excited LPC?

Code-excited LPC uses a more sophisticated model of the vocal tract, therefore the generated sound is more natural. This sophisticated version of the vocal tract model is known as the code-excited linear prediction (CELP) model.



5. A.12. Define pitch and period.

Pitch: the pitch of a signal gives information about the fundamental frequency. The pitch of every person is different; however, it is in a similar range for males and in another range for females.

Period: this is the time duration of the signal. It is also one of the important features.



5. A.13. List the applications of LPC.

Since the generated sound is very synthetic, LPC is used mainly for military purposes.

LPC synthesis is used in applications which require very small bandwidth.





5. A.14. List the four international standards based on CELP.

They are the ITU-T recommendations G.728, G.729, G.729(A) and G.723.1.





5. A.15. What is meant by temporal masking?

When the ear hears a loud sound, a certain time has to pass before it can hear a quieter sound. This is called temporal masking.



5. A.16. What is MPEG?

MPEG stands for Motion Pictures Expert Group. It was formed by the ISO. MPEG has developed standards for compression of video with audio. MPEG audio coders are used for compression of audio; this compression mainly uses perceptual coding.



5. A.17. Draw the frame format of the MPEG audio encoder.

[Figure not reproduced in this text.]



5. A.18. Write the advantages and applications of Dolby AC-1.

Advantages:

Simple encoding scheme due to fixed bit allocations.
Reduced compressed bit rate, since the frames do not include bit allocations. The typical compressed bit rate is 512 kbps for a two-channel stereo signal.

Applications:

It is used in satellites for FM radio.
It is also used for compression of the sound associated with TV programs.



5. A.19. Write the advantages and disadvantages of Dolby AC-2.

Advantages:

Bit allocations are not transmitted in the frame.
The bit rate of the encoded audio is lower than that of MPEG audio coding.

Disadvantages:

Complexity is more, since a psychoacoustic model and spectral envelope encoders/decoders are used.
Subband samples are encoded and transmitted in the frame; hence the bit rate of the compressed data is only slightly reduced.
It cannot be used for broadcast applications, since the encoder and decoder both contain the psychoacoustic model; therefore the encoder cannot be modified easily.






5. A.20. Define I, P and B frames.

I-frame: also known as an intracoded frame. It is normally the first frame in a new scene.

P-frame: also known as a predictive frame. It basically predicts the movement of objects with respect to the preceding I-frame.

B-frame: also known as a bidirectional frame. These frames relate the motion of objects in preceding as well as succeeding frames.




PART B (16 Marks)

UNIT I



1. B.1. Explain briefly the source coding theorem. (6)

Source coding:

1. Source symbols are encoded in binary.
2. The average codelength must be reduced.
3. Removing redundancy reduces the bit-rate.

Consider a discrete memoryless source on the alphabet

S = {s0, s1, ..., sK}

Let the corresponding probabilities be {p0, p1, ..., pK} and the codelengths be {l0, l1, ..., lK}. Then the average codelength (average number of bits per symbol) of the source is defined as

L̄ = Σk pk lk

If Lmin is the minimum possible value of L̄, then the coding efficiency of the source is given by

η = Lmin / L̄
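A small Python sketch of the average codelength and efficiency computation (the code lengths below belong to a hypothetical prefix code {0, 10, 110, 111}):

```python
import math

def average_code_length(probs, lengths):
    """L = sum(p_k * l_k): average number of code bits per source symbol."""
    return sum(p * l for p, l in zip(probs, lengths))

probs, lengths = [0.5, 0.25, 0.125, 0.125], [1, 2, 3, 3]
L = average_code_length(probs, lengths)
H = sum(p * math.log2(1 / p) for p in probs)   # source entropy
print(L, H, H / L)  # 1.75 1.75 1.0 -> a 100% efficient code
```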



1. B.1.b. Write the channel coding theorem and the channel capacity theorem (10)

The Shannon theorem states that, given a noisy channel with channel capacity C and information transmitted at a rate R, if R < C there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.

The converse is also important. If R > C, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.

The channel capacity C can be calculated from the physical properties of the channel; for a band-limited channel with Gaussian noise, C = B log2(1 + S/N).

1. B.2. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are as described below:

Symbol: s0 s1 s2 s3 s4 s5 s6
Prob: 0.25 0.25 0.0625 0.0625 0.125 0.125 0.125

(i) Compute the Huffman code for this source, moving a combined symbol as high as possible (10)
(ii) Calculate the coding efficiency (4)
(iii) Why does the computed source code have an efficiency of 100%? (2)
[Refer Class Notes]



1. B.3.a. Consider the following binary sequence: 111010011000101110100. Use the Lempel-Ziv algorithm to encode this sequence. Assume that the binary symbols 1 and 0 are already in the code book (12)



1. B.3.b. What is the advantage of the Lempel-Ziv encoding algorithm over Huffman coding? (4)

In general, if we have a random source of data (= 1 bit entropy/bit), no encoding, including Huffman, is likely to compress it on average. If Lempel-Ziv were "perfect" (which it approaches for most classes of sources, as the length goes to infinity), post-encoding with Huffman wouldn't help. Of course, Lempel-Ziv isn't perfect, at least with finite length, and so some redundancy remains. It is this remaining redundancy which the Huffman coding partially eliminates and thereby improves compression.
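As an illustration of the dictionary-building idea, here is an LZ78-style parsing sketch in Python (one of several Lempel-Ziv variants; the codebook convention assumed in question 1.B.3.a may differ in its details):

```python
def lz78_parse(bits):
    """LZ78-style parse: each phrase = (index of a known phrase) + one new bit."""
    dictionary, phrases, current = {"": 0}, [], ""
    for b in bits:
        if current + b in dictionary:
            current += b                  # keep extending a known phrase
        else:
            phrases.append((dictionary[current], b))
            dictionary[current + b] = len(dictionary)
            current = ""
    return phrases

print(lz78_parse("111010011000101110100"))
# [(0,'1'), (1,'1'), (0,'0'), (1,'0'), (3,'1'), (4,'0'), (5,'0'), (2,'1'), (7,'0')]
```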











1. B.4. A discrete memoryless source has an alphabet of five symbols, with the probabilities for its output as given here:

[X] = [x1 x2 x3 x4 x5]
P[X] = [0.45 0.15 0.15 0.10 0.15]

Compute two different Huffman codes for this source. For these two codes, find
(i) the average code word length
(ii) the variance of the average code word length over the ensemble of source symbols (16)
[Refer Class Notes]



1. B.5. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.

(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the code (7)
(ii) Repeat for the Huffman code and compare the results (9)
[Refer Class Notes]







1. B.6. Consider that two sources S1 and S2 emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X,Y) as shown in the matrix form:

P(X,Y) = [ 3/40  1/40  1/40
           1/20  3/20  1/20
           1/8   1/8   3/8  ]

Calculate the entropies H(X), H(Y), H(X/Y) and H(Y/X) (16)
[Refer Class Notes]



1. B.7. Apply the Huffman coding procedure to the following message ensemble and determine the average length of the encoded message. Also determine the coding efficiency. Use coding alphabet D = 4. There are 10 symbols.

X = [x1, x2, x3, ..., x10]
P[X] = [0.18, 0.17, 0.16, 0.15, 0.1, 0.08, 0.05, 0.05, 0.04, 0.02] (16)
[Refer Class Notes]




UNIT II


UNIT II

2. B.1. (i) Compare and contrast DPCM and ADPCM (6)

DPCM: stores a multibit difference value. A bipolar D/A converter is used for playback, to convert the successive difference values to an analog waveform.

ADPCM: stores a difference value that has been mathematically adjusted according to the slope of the input waveform. A bipolar D/A converter is used to convert the stored digital code to analog for playback.


2. B.1. (ii) Define pitch, period and loudness (8)

Pitch is a term used to describe how high or low a note, being played by a musical instrument or sung, seems to be.

The pitch of a note depends on the frequency of the source of the sound.

Frequency is measured in hertz (Hz), with one vibration per second being equal to one hertz (1 Hz).

A high frequency produces a high-pitched note and a low frequency produces a low-pitched note.

Period is the time duration of the signal (see 5.A.12).

Loudness depends on the amplitude of the sound wave.

The larger the amplitude, the more energy the sound wave contains, and therefore the louder the sound.

2. B.1. (iii) What is a decibel? (2)

The decibel (dB) is a logarithmic unit that indicates the ratio of a physical quantity (usually power or intensity) relative to a specified or implied reference level.





2. B.2. (i) Explain delta modulation with examples (8)

Delta modulation (DM or Δ-modulation) is an analog-to-digital and digital-to-analog signal conversion technique used for the transmission of voice information where quality is not of primary importance. DM is the simplest form of differential pulse-code modulation (DPCM), where the difference between successive samples is encoded into n-bit data streams. In delta modulation, the transmitted data is reduced to a 1-bit data stream.

Its main features are:

* the analog signal is approximated with a series of segments
* each segment of the approximated signal is compared to the original analog wave to determine the increase or decrease in relative amplitude
* the decision process for establishing the state of successive bits is determined by this comparison
* only the change of information is sent, that is, only an increase or decrease of the signal amplitude from the previous sample is sent, whereas a no-change condition causes the modulated signal to remain at the same 0 or 1 state as the previous sample

To achieve a high signal-to-noise ratio, delta modulation must use oversampling techniques; that is, the analog signal is sampled at a rate several times higher than the Nyquist rate.

Derived forms of delta modulation are continuously variable slope delta modulation, delta-sigma modulation, and differential modulation. Differential pulse code modulation is the superset of DM.
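A minimal Python sketch of a 1-bit delta modulator (the fixed step size and the test signal are hypothetical choices for illustration):

```python
import math

def delta_modulate(samples, step=0.1):
    """1-bit DM: emit 1 if the input is above the staircase approximation."""
    approx, bits = 0.0, []
    for x in samples:
        bit = 1 if x > approx else 0
        approx += step if bit else -step   # staircase tracks the input
        bits.append(bit)
    return bits

signal = [math.sin(2 * math.pi * t / 64) for t in range(64)]  # oversampled sine
print(delta_modulate(signal))
```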



2. B.2. (ii) Explain sub-band adaptive differential pulse code modulation (6)

Sub-band adaptive differential pulse code modulation (SB-ADPCM) is a 7 kHz wideband speech codec based on the sub-band coding of two ADPCM channels.

A basic SBC scheme: to enable higher quality compression, one may use sub-band coding. First, a digital filter bank divides the input signal spectrum into some number (e.g., 32) of sub-bands. The psychoacoustic model looks at the energy in each of these sub-bands, as well as in the original signal, and computes masking thresholds using psychoacoustic information. Each of the sub-band samples is quantized and encoded so as to keep the quantization noise below the dynamically computed masking threshold. The final step is to format all these quantized samples into groups of data called frames, to facilitate eventual playback by a decoder.

Decoding is much easier than encoding, since no psychoacoustic model is involved. The frames are unpacked, sub-band samples are decoded, and a frequency-time mapping reconstructs an output audio signal.

Over the last five to ten years, SBC systems have been developed by many of the key companies and laboratories in the audio industry. Beginning in the late 1980s, a standardization body called the Motion Picture Experts Group (MPEG) developed generic standards for coding of both audio and video. Sub-band coding resides at the heart of the popular MP3 format (more properly known as MPEG-1 Audio Layer III).

SB-ADPCM is defined in G.722 of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) standards. In November 1988, G.722 was approved by the ITU-T. SB-ADPCM is used to transmit a large amount of voice data.




2. B.3. With a block diagram, explain the DPCM system. Compare DPCM with PCM and DM systems (16)

Differential pulse code modulation (DPCM) is a procedure of converting an analog signal into a digital signal in which the analog signal is sampled and then the difference between the actual sample value and its predicted value (the predicted value is based on a previous sample or samples) is quantized and then encoded, forming a digital value.

DPCM code words represent differences between samples, unlike PCM, where code words represent a sample value.

The basic concept of DPCM, coding a difference, is based on the fact that most source signals show significant correlation between successive samples, so encoding exploits redundancy in sample values, which implies a lower bit rate.

Realization of this basic concept is based on a technique in which we predict the current sample value based upon previous samples (or a sample) and encode the difference between the actual value of the sample and the predicted value (the difference between samples can be interpreted as prediction error). Because it is necessary to predict the sample value, DPCM is a form of predictive coding.

DPCM compression depends on the prediction technique; well-conducted prediction techniques lead to good compression rates, while in other cases DPCM could mean expansion compared to regular PCM encoding.

Delta modulation

Delta modulation (DM) is a subclass of differential pulse code modulation. It can be viewed as a simplified variant of DPCM, in which a 1-bit quantizer is used with a fixed first-order predictor. It was developed for voice telephony applications.

Principle of DM: the DM output is 0 if the waveform falls in value and 1 if it rises in value; each bit indicates the direction in which the signal is changing (not how much), i.e., DM codes the direction of differences in signal amplitude instead of the value of the difference (DPCM).

The basic concept of delta modulation can be explained with the DM block diagram shown in the figure.

An illustration of DPCM's advantages over PCM: a typical example of a signal well suited to DPCM is a line in a continuous-tone (photographic) image, which mostly contains smooth tone transitions. Another example would be an audio signal with a low-biased frequency spectrum.

For illustration, consider two histograms made from the same picture coded in the two ways, showing the PCM and DPCM sample frequencies respectively. On the first histogram (Fig. 4), a large number of samples have a significant frequency, and we cannot pick only a few of them to assign shorter code words to achieve compression. On the second histogram (Fig. 5), practically all the samples are between -20 and +20, so we can assign short code words to them and achieve a solid compression rate.
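A minimal sketch of the DPCM idea in Python, using the simplest predictor (the previous sample) and omitting quantization; the sample values are the ones reused later in the JPEG DC-coefficient example:

```python
def dpcm_encode(samples):
    """DPCM with a previous-sample predictor: transmit prediction errors."""
    pred, diffs = 0, []
    for x in samples:
        diffs.append(x - pred)   # prediction error (no quantizer in this sketch)
        pred = x
    return diffs

def dpcm_decode(diffs):
    pred, out = 0, []
    for d in diffs:
        pred += d
        out.append(pred)
    return out

data = [150, 155, 149, 152, 144]
print(dpcm_encode(data))               # [150, 5, -6, 3, -8]
print(dpcm_decode(dpcm_encode(data)))  # round-trips to the original samples
```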





2. B.4. (i) Explain DM systems with a block diagram (8)

Delta modulation (DM) is a subclass of differential pulse code modulation. It can be viewed as a simplified variant of DPCM, in which a 1-bit quantizer is used with a fixed first-order predictor. It was developed for voice telephony applications.

Principle of DM: the DM output is 0 if the waveform falls in value and 1 if it rises in value; each bit indicates the direction in which the signal is changing (not how much), i.e., DM codes the direction of differences in signal amplitude instead of the value of the difference (DPCM).

The basic concept of delta modulation can be explained with the DM block diagram shown in the figure.



2. B.4. (ii) Consider a sine wave of frequency fm and amplitude Am, which is applied to a delta modulator of step size Δ. Show that slope overload distortion will occur if Am > Δ / (2π fm Ts), where Ts is the sampling period. What is the maximum power that may be transmitted without slope overload distortion? (8)
[Refer Class Notes]


2. B.5. Explain adaptive quantization and prediction with backward estimation in an ADPCM system, with a block diagram (16)

ADPCM (adaptive differential pulse-code modulation) is a technique for converting sound or analog information to binary information (a string of 0's and 1's) by taking frequent samples of the sound and expressing the value of the sampled sound modulation in binary terms. ADPCM is used to send sound on fiber-optic long-distance lines as well as to store sound along with text, images, and code on a CD-ROM.

With backward estimation, the quantizer step size and the predictor coefficients are adapted from the past reconstructed signal, which is also available at the decoder, so no side information has to be transmitted.





2. B.6. What is modulation? Explain how the adaptive delta modulator works with different algorithms. Compare delta modulation with adaptive delta modulation (16)

Modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, in accordance with a modulating signal.

Delta modulation (DM) is a subclass of differential pulse code modulation. It can be viewed as a simplified variant of DPCM, in which a 1-bit quantizer is used with a fixed first-order predictor. It was developed for voice telephony applications.

Principle of DM: the DM output is 0 if the waveform falls in value and 1 if it rises in value; each bit indicates the direction in which the signal is changing (not how much), i.e., DM codes the direction of differences in signal amplitude instead of the value of the difference (DPCM).

Adaptive delta modulation (ADM), or continuously variable slope delta modulation (CVSD), is a modification of DM in which the step size is not fixed. Rather, when several consecutive bits have the same direction value, the encoder and decoder assume that slope overload is occurring, and the step size becomes progressively larger. Otherwise, the step size becomes gradually smaller over time. ADM reduces slope error at the expense of increasing quantizing error. This error can be reduced by using a low pass filter.
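A sketch of one common step-size adaptation rule in Python, extending the delta modulator sketched earlier (the initial step and growth factor are hypothetical choices):

```python
def adaptive_delta_modulate(samples, step=0.05, k=1.5):
    """ADM sketch: grow the step on repeated same-direction bits, else shrink."""
    approx, bits, last_bit = 0.0, [], None
    for x in samples:
        bit = 1 if x > approx else 0
        if last_bit is not None:
            step = step * k if bit == last_bit else step / k  # adapt step size
        approx += step if bit else -step
        bits.append(bit)
        last_bit = bit
    return bits
```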





2. B.7. Explain pulse code modulation and differential pulse code modulation (16)

Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionality based on the prediction of the samples of the signal. The input can be an analog signal or a digital signal.

If the input is a continuous-time analog signal, it needs to be sampled first so that a discrete-time signal is the input to the DPCM encoder.

Option 1: take the values of two consecutive samples; if they are analog samples, quantize them; calculate the difference between the first one and the next; the output is the difference, and it can be further entropy coded.

Option 2: instead of taking a difference relative to a previous input sample, take the difference relative to the output of a local model of the decoder process; in this option, the difference can be quantized, which allows a good way to incorporate a controlled loss in the encoding.

Applying one of these two processes, short-term redundancy (positive correlation of nearby values) of the signal is eliminated; compression ratios on the order of 2 to 4 can be achieved if differences are subsequently entropy coded, because the entropy of the difference signal is much smaller than that of the original discrete signal treated as independent samples.

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, Compact Discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

PCM streams have two basic properties that determine their fidelity to the original analog signal: the sampling rate, the number of times per second that samples are taken; and the bit depth, which determines the number of possible digital values that each sample can take.



UNIT III




3. B.1. Consider a Hamming code C which is determined by the parity check matrix

H = [ 1 1 0 1 1 0 0
      1 0 1 1 0 1 0
      0 1 1 1 0 0 1 ]

(i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are code words of C and calculate the Hamming distance between them (8)

(ii) Assume that a code word c was transmitted and that a vector r = c + e is received. Show that the syndrome S = r · H^T only depends on the error vector e. (4)

(iii) Calculate the syndromes for all possible error vectors e with Hamming weight ≤ 1 and list them in a table. How can this be used to correct a single bit error in an arbitrary position? (4)
[Refer Class Notes]
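A sketch of part (iii) in Python/NumPy: the syndrome of each single-bit error pattern equals the corresponding column of H, so a table of syndromes identifies the error position:

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Syndromes of all error patterns of weight <= 1 (e = 0, then each single bit).
for i in range(8):
    e = np.zeros(7, dtype=int)
    if i > 0:
        e[i - 1] = 1                     # error in bit position i
    print(e, (e @ H.T) % 2)              # syndrome = i-th column of H (or 0)
```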




3. B.2. (i) Define linear block codes (12)

We assume that the output of an information source is a sequence of binary digits "0" or "1". In block coding, this binary information sequence is segmented into message blocks of fixed length; each message block, denoted by u, consists of k information digits. There are a total of 2^k distinct messages. The encoder, according to certain rules, transforms each input message u into a binary n-tuple v with n > k. This binary n-tuple v is referred to as the code word (or code vector) of the message u, as shown in Figure 1. Therefore, corresponding to the 2^k possible messages, there are 2^k code words. This set of 2^k code words is called a block code.

For a block code to be useful, the 2^k code words must be distinct. Therefore, there should be a one-to-one correspondence between a message u and its code word v.





3. B.2. (ii) How to find the parity check matrix? (4)

In coding theory, a parity-check matrix of a linear block code C is a generator matrix of the dual code. As such, a codeword c is in C if and only if the matrix-vector product H c^T = 0.

The rows of a parity check matrix are parity checks on the code words of a code. That is, they show how linear combinations of certain digits of each codeword equal zero. For example, the parity check matrix

H = [ 0 0 1 1
      1 1 0 0 ]

specifies that for each codeword, digits 1 and 2 should sum to zero (according to the second row) and digits 3 and 4 should sum to zero (according to the first row).




3. B.3. Consider the generation of a (7,4) cyclic code by the generator polynomial g(x) = 1 + x + x^3. Calculate the code word for the message sequence 1001 and construct the systematic generator matrix G. (8)
[Refer Class Notes]
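A sketch of systematic cyclic encoding by polynomial long division over GF(2) in Python (taking the leftmost message bit as the highest-degree coefficient; conventions vary between textbooks):

```python
def cyclic_encode(msg, gen, n):
    """Systematic cyclic code: parity = remainder of x^(n-k) * m(x) / g(x)."""
    k = n - (len(gen) - 1)
    reg = msg + [0] * (n - k)            # message shifted up by n-k positions
    for i in range(k):                   # long division over GF(2)
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return msg + reg[k:]                 # codeword = message bits + parity bits

# g(x) = 1 + x + x^3, written highest degree first: x^3 + 0x^2 + x + 1
print(cyclic_encode([1, 0, 0, 1], [1, 0, 1, 1], 7))  # [1, 0, 0, 1, 1, 1, 0]
```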


3. B.4. (i) Find a (7,4) cyclic code to encode the message sequence (10111) using the generator polynomial g(x) = 1 + x + x^3 (8)
[Refer Class Notes]

(ii) Calculate the systematic generator matrix for the polynomial g(x) = 1 + x + x^3. Also draw the encoder diagram (8)



3. B.5. Verify whether g(x) = 1 + x + x^2 + x^3 + x^4 is a valid generator polynomial for generating a cyclic code for the message [111] (16)




3. B.6. A convolutional encoder is defined by the following generator polynomials:

g0(x) = 1 + x + x^2 + x^3 + x^4
g1(x) = 1 + x + x^3 + x^4
g2(x) = 1 + x^2 + x^4

(i) What is the constraint length of this code? (4)
(ii) How many states are in the trellis diagram of this code? (8)
(iii) What is the code rate of this code? (4)
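A sketch of this encoder in Python, with the tap vectors read off from g0, g1 and g2 (coefficients of x^0 ... x^4); every input bit produces three output bits, i.e. the rate is 1/3:

```python
def conv_encode(bits,
                taps=((1, 1, 1, 1, 1),   # g0 = 1 + x + x^2 + x^3 + x^4
                      (1, 1, 0, 1, 1),   # g1 = 1 + x + x^3 + x^4
                      (1, 0, 1, 0, 1))): # g2 = 1 + x^2 + x^4
    """Rate-1/3 convolutional encoder with four memory elements."""
    state, out = [0, 0, 0, 0], []
    for b in bits:
        window = [b] + state             # current bit plus the past four bits
        for g in taps:
            out.append(sum(w & t for w, t in zip(window, g)) % 2)
        state = [b] + state[:-1]         # shift register update
    return out

print(conv_encode([1, 0, 1]))  # 9 output bits for 3 input bits
```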



3. B.7. Explain the H.261 standard.

Video compression: H.261

Contents: H.261 overview, motivation, features of H.261, H.261 coder.

H.261 overview:

An ITU-T H-series standard applicable to videophone and video conferencing. The video coding algorithm is designed for transport using the Real-time Transport Protocol (RTP). It operates in real time with limited delay, and the transmission bit rate is at multiples of 64 kbit/s. It became an ITU-T (International Telecom Union) recommendation in 1990, and it is a precursor to the coding schemes found in H.263 and MPEG-1.

H.261 (p × 64) motivation:

Uncompressed video and audio data are huge, and the compression ratio of lossless methods is not high enough. The target networks are p × 64 kbps, 1 ≤ p ≤ 30: 64 kbps (p = 1) ≤ data rate ≤ 1920 kbps (p = 30). This covers transmission from the ISDN base rate (64 kbps) up past the T-1 data rate (1.54 Mbps), with a maximum delay of 150 ms.


The coding algorithm is a hybrid of:

Inter-picture prediction: removes temporal redundancy.
Transform coding: removes spatial redundancy.
Motion compensation: uses motion vectors to help the codec compensate for motion.

The data rate can be set between 40 kbit/s and 2 Mbit/s. Input signal formats: CIF (Common Intermediate Format) and QCIF (Quarter CIF) are available.


Features of H.261:

Source format and bit rate: the target bit rate is about 64 kbps to 1920 kbps.

Picture formats: CIF (Common Intermediate Format, covering NTSC and PAL) and QCIF (Quarter Common Intermediate Format), at 29.97 frames per second with 4:2:0 chrominance sub-sampling (Y:CB:CR).

Video multiplex arrangement: a picture is coded as luminance and two color difference components (Y, CB and CR), organized into a group of blocks (GOB) structure and macroblocks (MB). There are 4 layers in the compressed stream: picture, GOB (group of blocks), MB (macroblock) and block.

[Slides detailing the H.261 picture syntax are not reproduced here.]

H.261 coding frame types:

Intra-encoded frames (I-frames): similar to JPEG compression; spatial filtering of still objects (transform coding). Example: a black and white background pattern.

Predicted frames (P-frames): predicted based on an earlier frame; temporal filtering and inter-picture prediction. Example: an application user.

H.261 motion compensation:

Assumption: as parts of an image move, their colors stay mostly constant.

Idea: find similar parts in other images and encode where they were found (i.e., a motion vector). The previously decoded image serves as the reference image, the image to code is the target image, and only the residual is encoded.


H.261 coder, techniques used:

Two-dimensional (2-D) 8 × 8 DCT to remove intra-frame correlation.
Zig-zag order to scan the transform coefficients.
Run length coding for zero-valued coefficients after quantization.
Motion estimation applied to the video sequence to improve the prediction between successive frames.
Transmission rate control in the range of p × 64 kbps.
Error resilience, including the synchronization and concealment techniques required in the transmission code to cover up channel errors.
Common Intermediate Format (CIF) and Quarter CIF (QCIF) as a single solution for different video formats (NTSC / PAL).






3. B.8. Construct a convolutional encoder for the following specifications: rate efficiency = 1/2, constraint length = 4. The connections from the shift register to the modulo-2 adders are described by the following equations: g1(x) = 1 + x, g2(x) = x. Determine the output codeword for the input message [1110] (16)
[Refer Class Notes]







UNIT IV



UNIT IV

4. B.1. (i) Discuss the various stages in the JPEG standard (16)









The JPEG Standard

JPEG is an image compression standard which was accepted as an international standard in 1992. It was developed by the Joint Photographic Experts Group of the ISO/IEC for coding and compression of color/gray scale images, and it yields acceptable compression in the 10:1 range.

JPEG is a lossy compression technique based on the DCT. JPEG is a general image compression technique independent of:

Image resolution
Image and pixel aspect ratio
Color system
Image complexity

A scheme for video compression based on JPEG, called Motion JPEG (MJPEG), exists.



JPEG is effective because of the following three observations:

Image data usually changes slowly across an image, especially within an 8×8 block; therefore images contain much redundancy.
Experiments indicate that humans are not very sensitive to the high-frequency data in images; therefore we can remove much of this data using transform coding.
Humans are much more sensitive to brightness (luminance) information than to color (chrominance); JPEG therefore uses chroma subsampling (4:2:0).

The following gives an overview of the various steps in JPEG compression.



JPEG encoding overview: the main steps in JPEG encoding are the following:

Transform RGB to YUV or YIQ and subsample color
DCT on 8×8 image blocks
Quantization
Zig-zag ordering and run-length encoding
Entropy coding

DCT on image blocks: the image is divided up into 8×8 blocks, and a 2D DCT is performed on each block. The DCT is performed independently for each block. This is why, when a high degree of compression is requested, JPEG gives a "blocky" image result.
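For reference, a naive (unoptimized) 2-D DCT-II of an 8×8 block can be sketched in Python as follows; real JPEG codecs use fast factorized versions of this transform:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block."""
    n = 8
    c = lambda k: math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]
print(round(dct2(flat)[0][0]))  # 800: a flat block has only a DC coefficient
```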



Quantization: quantization in JPEG aims at reducing the total number of bits in the compressed image. Divide each entry in the frequency space block by an integer, then round, using a quantization matrix Q(u,v).

Larger entries in Q are used for the higher spatial frequencies; these are the entries towards the lower right part of the matrix. The default Q(u,v) values for luminance and chrominance are based on psychophysical studies intended to maximize compression ratios while minimizing perceptual distortion. Since after division the entries are smaller, we can use fewer bits to encode them.



Multiple quantization matrices can be used (perhaps by scaling the defaults), allowing the user to choose how much compression to use; this trades off quality vs. compression ratio. More compression means larger entries in Q.

[Figures not reproduced: an example of JPEG coding and decoding on one image block, showing the original and DCT-coded block, the quantized and reconstructed blocks, the result after the IDCT and its difference from the original, and the same steps on a less homogeneous block.]



Preparation for Entropy Coding

- We have seen two main steps in JPEG coding: DCT and quantization.
- The remaining steps all lead up to entropy coding of the quantized DCT coefficients. These additional data compression steps are lossless; most of the lossiness is in the quantization step.



Run-Length Coding

- The AC and DC components are treated differently.
- Since after quantization we have many zero AC components, RLC is a good idea. Note that most of the zero components are towards the lower right corner (high spatial frequencies).
- To take advantage of this, use zigzag scanning to create a 64-vector. (Figure: the zigzag scan order in JPEG.)



Run-Length Coding

- The RLC step replaces values in a 64-vector (previously an 8x8 block) by pairs (RUNLENGTH, VALUE), where RUNLENGTH is the number of zeroes in the run and VALUE is the next non-zero value.
- From the first example we have (32, 6, -1, -1, 0, -1, 0, 0, 0, -1, 0, 0, 1, 0, 0, ..., 0). This becomes (0,6) (0,-1) (0,-1) (1,-1) (3,-1) (2,1) (0,0), as in the sketch below. Note that the DC coefficient is ignored by the RLC step.
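A sketch of the (RUNLENGTH, VALUE) step on the example above (the escape code for runs longer than 15 is omitted for brevity):

```python
def run_length_code(ac):
    """Emit (RUNLENGTH, VALUE) pairs for the 63 AC values of a block,
    closing with the end-of-block pair (0, 0)."""
    pairs, run = [], 0
    for v in ac:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((0, 0))  # EOB: only zeros remain
    return pairs

ac = [6, -1, -1, 0, -1, 0, 0, 0, -1, 0, 0, 1] + [0] * 51  # DC (32) skipped
print(run_length_code(ac))
# [(0, 6), (0, -1), (0, -1), (1, -1), (3, -1), (2, 1), (0, 0)]
```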



Coding of DC Coefficients

- There is 1 DC coefficient per block. DC coefficients may vary greatly over the whole image, but slowly from one block to its neighbor (once again, zigzag order).
- So apply Differential Pulse Code Modulation (DPCM) to the DC coefficients: if the first five DC coefficients are 150, 155, 149, 152, 144, we come up with the DPCM code 150, 5, -6, 3, -8.
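The DPCM step is essentially one line; a sketch reproducing the example:

```python
def dpcm_encode(dc):
    """First DC value as-is, then block-to-block differences."""
    return [dc[0]] + [dc[i] - dc[i - 1] for i in range(1, len(dc))]

print(dpcm_encode([150, 155, 149, 152, 144]))  # -> [150, 5, -6, 3, -8]
```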



Entropy Coding

- Now we apply entropy coding to the RLC-coded AC coefficients and the DPCM-coded DC coefficients.
- The baseline entropy coding method uses Huffman coding on images with 8-bit components.
- DPCM-coded DC coefficients are represented by a pair of symbols (SIZE, AMPLITUDE), where SIZE is the number of bits needed to represent the coefficient and AMPLITUDE is the actual bits.



Entropy Coding

(Table in the original slides: the SIZE category for the different possible amplitudes.) DPCM values might require more than 8 bits and might be negative.



Entropy Coding

- One's complement is used for negative numbers.
- The codes 150, 5, -6, 3, -8 become (8, 10010110), (3, 101), (3, 001), (2, 11), (4, 0111).
- Now the SIZE is Huffman coded; expect lots of small SIZEs.
- AMPLITUDE is not Huffman coded; a pretty uniform distribution is expected, so it is probably not worthwhile.
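A sketch of the (SIZE, AMPLITUDE) mapping with one's-complement negatives, reproducing the example values:

```python
def size_amplitude(v):
    """SIZE = bits needed for |v|; AMPLITUDE = those bits, with every
    bit flipped (one's complement) when v is negative."""
    if v == 0:
        return (0, "")
    size = abs(v).bit_length()
    bits = format(abs(v), "0{}b".format(size))
    if v < 0:
        bits = "".join("1" if b == "0" else "0" for b in bits)
    return (size, bits)

print([size_amplitude(v) for v in [150, 5, -6, 3, -8]])
# [(8, '10010110'), (3, '101'), (3, '001'), (2, '11'), (4, '0111')]
```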



Huffman Coding for AC Coefficients

- AC coefficients have been RL coded and represented by symbol pairs (RUNLENGTH, VALUE), where VALUE is really a (SIZE, AMPLITUDE) pair.
- RUNLENGTH and SIZE are each 4-bit values stored in a single byte: Symbol1. For runs greater than 15, the special code (15, 0) is used.
- Symbol2 is the AMPLITUDE. Symbol1 is Huffman coded; Symbol2 is not.



JPEG Modes

JPEG supports several different modes:
- Sequential Mode
- Progressive Mode
- Hierarchical Mode
- Lossless Mode

Sequential is the default mode: each image component is encoded in a single left-to-right, top-to-bottom scan. This is the mode we have been describing.



Progressive Mode

- Progressive mode delivers low-quality versions of the image quickly, and then fills in the details in successive passes. This is useful for web browsers, where the image download might take a long time: the user gets an approximate image quickly.
- This can be done by sending the DC coefficient and a few AC coefficients first, then some more (low spatial resolution) AC coefficients, continuing in this way until all of the coefficients have been sent.

(Figure: sequential vs. progressive decoding.)



Hierarchical Mode

- Hierarchical mode encodes the image at several different resolutions.
- These resolutions can be transmitted in multiple passes, with increased resolution at each pass.

(Figures in the original slides illustrate the hierarchical-mode process.)



JPEG Bitstream

The JPEG hierarchical organization:
- A frame is a picture, a scan is a picture component, and a segment is a group of blocks.
- The frame header includes bits per pixel, size of image, quantization table, etc.
- The scan header includes the number of components, Huffman coding tables, etc.

(Figure: the JPEG bitstream layout.)



JPEG2000

- JPEG2000 (extension .jp2) is the latest series of standards from the JPEG committee. It uses wavelet technology.
- Improvements over JPG include: better compression; superior lossless compression; support for large images and images with many components; region-of-interest coding; compound documents; computer-generated imagery.

(Figure: region-of-interest coding.)



JBIG

- JBIG (Joint Bi-Level Image Processing Group) is a standard for coding binary images: faxes, scanned documents, etc. These have characteristics different from color/greyscale images, which lend themselves to different coding techniques.
- JBIG provides lossless coding; JBIG2 provides both lossless and lossy coding.



4.B.1. (ii) Differentiate lossless and lossy compression techniques and give one example for each (4)



Lossless compression recreates a compressed file as an identical match to its original form. All lossless compression uses techniques to break up a file into smaller segments, for storage or transmission, that get reassembled later. Lossless compression is used for files, such as applications, that need to be reproduced exactly like the original file. Lossy compression, on the other hand, eliminates repeated or "unnecessary" pieces of data, as we discussed above. When such a file is decompressed, you get the compression software's re-interpretation of the original file.

Lossy compression can't be used to compress anything that needs to be reproduced exactly: it can't just toss out redundant pieces and hope the program will still work. Instead, lossy compression is more often used with data that is open to some level of human interpretation, such as an image, where the results can be "fudged" the tiniest bit so that files can get smaller without, in theory, anyone noticing.




4.B.2. Given the following symbols and probabilities of occurrence, encode the message "went#" using the arithmetic coding algorithm. Compare arithmetic coding with Huffman coding principles (16)

Symbols: e     n     t     w     #
Prob.:   0.3   0.3   0.2   0.1   0.1

[Refer Class Notes]



4.B.3. Explain arithmetic coding with a suitable example (16)


Arithmetic Coding

Objectives:
- The arithmetic coding idea
- Huffman vs. arithmetic coding
- Underflow
- Incremental coding
- Integer arithmetic & overflow
- Advantages vs. disadvantages
- Binary arithmetic coding
- Compressed trees

Model Based Approach

- The model is a way of calculating, in any given context, the distribution of probabilities for the next input symbol.
- The decoder must have access to the same model and be able to regenerate the same input string from the encoded string.

The Arithmetic Coding Idea: The Basic Algorithm

1. We begin with a "current interval" [L, H) initialized to [0, 1).
2. For each symbol, we perform two steps:
   (a) We subdivide the current interval into subintervals, one for each possible alphabet symbol. The size of a symbol's subinterval is proportional to the estimated probability that the symbol will be the next symbol in the file, according to the model of the input.
   (b) We select the subinterval corresponding to the symbol that actually occurs next in the file, and make it the new current interval.
3. We output enough bits to distinguish the final current interval from all other possible final intervals.

The Basic Algorithm (cont.)

Computation of the subinterval corresponding to the i-th symbol that occurs (the slide formula is reconstructed here from step 2a, with C(i) the cumulative probability of symbols 1, ..., i):

[L', H') = [L + (H - L)*C(i-1), L + (H - L)*C(i))
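A floating-point sketch of steps 1-2 (pure, non-incremental coding; the symbol ordering that fixes the subintervals is an arbitrary choice here, so the exact interval need not match the slide example):

```python
def arith_encode(message, probs):
    """Return the final interval [L, H) after narrowing once per symbol."""
    cum, c = {}, 0.0
    for s in probs:            # cumulative probability below each symbol
        cum[s] = c
        c += probs[s]
    L, H = 0.0, 1.0
    for s in message:
        width = H - L          # subdivide, then keep this symbol's slice
        L, H = L + width * cum[s], L + width * (cum[s] + probs[s])
    return L, H

probs = {"e": 0.2, "a": 0.2, "i": 0.4, "!": 0.2}   # model from the text
print(arith_encode("eaii!", probs))
```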

Encoding

- The message is represented by an interval of real numbers between 0 and 1. As the message becomes longer, its interval becomes smaller.
- A simple example: encoding the message eaii! (worked through in the original slides).

A few remarks:
- 5 decimal digits seems a lot to encode a message comprising 4 vowels! Our example ended up expanding rather than compressing.
- Different models will give different entropies. The best single-character model of the message eaii! is the set of symbol frequencies {e(0.2), a(0.2), i(0.4), !(0.2)}, which gives an entropy of 2.89 decimal digits.

Decoding

- The decoder receives the final range, or a value from that range, and finds the symbol whose subinterval contains it.
- Back to the example: the decoder gets the final range [0.23354, 0.2336). The range lies entirely within the space the model allocates for e.



- Huffman's method takes a set of probabilities and calculates, for each symbol, a code word that unambiguously represents that symbol. It is known to give the best possible representation when all of the symbols must be assigned discrete code words, each an integral number of bits long.
- In an arithmetic coder, the exact symbol probabilities are preserved, so compression effectiveness is better.

Two major difficulties

- The shrinking current interval requires the use of high-precision arithmetic.
- No output is produced until the entire message has been encoded.

Solution: incremental coding

- As the code range narrows, the top bits of L and H become the same. Any bits that are the same can be transmitted immediately, since they cannot be affected by future narrowing.
- Example: suppose the final interval is [0.8125, 0.825). In binary this is approximately [0.11010 00000, 0.11010 01100). We can uniquely identify this interval by outputting 110100.

Solution: incremental coding (cont.)

- Output each leading bit as soon as it is known.
- Double the length of the current interval so that it reflects only the unknown part of the final interval.

Underflow!

- Reminder: we scale the cumulative probabilities into the interval [L, H) for each character transmitted.
- Suppose L and H are so close together that this scaling operation maps some different symbols of the model onto the same integer in the interval. If such a symbol actually occurred, it would not be possible to continue encoding, so the encoder must guarantee that the interval [L, H) is always large enough to prevent this!
- When the interval straddles the midpoint, we don't yet know the next output bit, but we do know that the following bit will have the opposite value (01 or 10). We keep track of that fact and expand the current interval symmetrically about 1/2.
- What if after this operation the interval still straddles 1/2? We need only count the number of expansions and follow the next bit by that number of opposites!

Incremental Coding

After the selection of the subinterval corresponding to an input symbol, repeat the following steps:
- If the new subinterval lies entirely within [0, 1/2), we output 0 and any 1s left over from previous symbols; then we double the size of the interval [0, 1/2), expanding up.
- If the new subinterval lies entirely within [1/2, 1), we output 1 and any 0s left over from previous symbols; then we double the size of the interval [1/2, 1), expanding down.
- If the new subinterval lies entirely within [1/4, 3/4), we keep track of this fact for future output; then we double the size of the interval [1/4, 3/4), expanding in both directions away from the midpoint (see the sketch below).
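A float sketch of these three expansion cases (pending counts the deferred opposite bits from the straddling case):

```python
def expand(L, H, pending, out):
    """Apply the three interval-expansion cases until none fits."""
    while True:
        if H <= 0.5:                      # case [0, 1/2): emit 0, then 1s
            out.append(0); out.extend([1] * pending); pending = 0
            L, H = 2 * L, 2 * H
        elif L >= 0.5:                    # case [1/2, 1): emit 1, then 0s
            out.append(1); out.extend([0] * pending); pending = 0
            L, H = 2 * L - 1, 2 * H - 1
        elif 0.25 <= L and H <= 0.75:     # case [1/4, 3/4): defer one bit
            pending += 1
            L, H = 2 * L - 0.5, 2 * H - 0.5
        else:
            return L, H, pending
```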

Integer Arithmetic

- If the whole floating-point number had to be passed to the decompressor, no rounding could be performed, and the highest precision today's FPUs offer is 80 bits, so we can't work with the whole number!
- Instead we redefine the range [0, 1) to [0, N), i.e. [0000h, FFFFh), and reduce the probabilities so we only need 16 bits.
- In the subdivision process we select non-overlapping intervals (of length at least 1) with lengths approximately proportional to the probabilities.

Integer Arithmetic Example

(Worked example in the original slides: the numbers are divided by the maximum.)

Overflow!

Now we consider the possibility of overflow in the integer multiplications: in every step of the encoding the interval is expanded.

Advantages vs. Disadvantages

Advantages:
- Flexibility: works in conjunction with models that can provide a sequence of event probabilities (i.e. adaptive models).
- Optimality.

Disadvantages:
- Slow.
- Doesn't produce a prefix code.
- (minor) Needs to indicate EOF.
- (minor) Poor error resistance.

Binary Arithmetic Coding

- Till now we discussed a multi-symbol alphabet; arithmetic coding applies to a binary alphabet as well.
- Why distinguish the two cases? Both the coder and the interface to the model are simpler for a binary alphabet.
- The coding of bilevel images often produces probabilities close to 1, indicating the use of arithmetic coding to obtain good compression. Much of the arithmetic coding research by Rissanen, Langdon, and others at IBM has focused on bilevel images.
- With a binary alphabet the model no longer has to maintain and produce cumulative probabilities: a single probability suffices to encode each decision, and calculating the new current interval is also simplified, since just one endpoint changes after each decision.
- We now usually have to encode more than one event for each input symbol, so we have a new data-structure problem: maintaining the coding trees efficiently without using excessive space.
- In a Huffman tree the average path length is minimal, so the smallest average number of events per input symbol occurs when the tree is a Huffman tree. However, maintaining such trees dynamically is complicated and slow.


Compressed Trees

- An efficient data structure is needed to map each of n symbols to a sequence of binary choices.
- The compressed tree is a space-efficient data structure based on the complete binary tree. We are free to represent the n-symbol alphabet by a complete binary tree, because arithmetic coding has nearly optimal compression.
- The tree can be flattened (linearized) by breadth-first traversal, and we can save space by storing only one probability at each internal node.
- Compressed-tree example (figure in the original slides): a probability distribution for an 8-symbol alphabet is represented by the tree, rounding probabilities and expressing them as multiples of 0.01, together with its linear representation.




4.B.4. Explain about Hamming Codes in detail with an example.

History

In the late 1940s Richard Hamming recognized that the further evolution of computers required greater reliability, in particular the ability to not only detect errors, but correct them. His search for error-correcting codes led to the Hamming Codes, perfect 1-error correcting codes, and the extended Hamming Codes, 1-error correcting and 2-error detecting codes.

Uses

Hamming Codes are still widely used in computing, telecommunication, and other applications.
Hamming Codes are also applied in:
- Data compression
- Some solutions to the popular puzzle The Hat Game
- Block Turbo Codes

A [7,4] binary Hamming Code
Let our codeword be (x1 x2 ... x7) in F2^7. The bits x3, x5, x6, x7 are chosen according to the message (perhaps the message itself is (x3 x5 x6 x7)), and the parity bits are

x4 := x5 + x6 + x7 (mod 2)
x2 := x3 + x6 + x7 (mod 2)
x1 := x3 + x5 + x7 (mod 2)

(Table in the original notes: the sixteen [7,4] binary Hamming code words.)
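A direct transcription of these equations as a sketch (Python; positions are 1-based as in the text, and the helper name is illustrative):

```python
def hamming74_encode(m3, m5, m6, m7):
    """Message bits go to positions 3, 5, 6, 7; parity bits x4, x2, x1
    are filled in by the mod-2 equations above."""
    x = [0] * 8                       # x[1..7]; x[0] unused
    x[3], x[5], x[6], x[7] = m3, m5, m6, m7
    x[4] = (x[5] + x[6] + x[7]) % 2
    x[2] = (x[3] + x[6] + x[7]) % 2
    x[1] = (x[3] + x[5] + x[7]) % 2
    return x[1:]

print(hamming74_encode(1, 0, 1, 0))   # -> [1, 0, 1, 1, 0, 1, 0]
```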




A [7,4] binary Hamming Code

Let a = x4 + x5 + x6 + x7 (a = 1 iff one of these bits is in error).
Let b = x2 + x3 + x6 + x7.
Let c = x1 + x3 + x5 + x7.

If there is an error (assuming at most one), then abc will be the binary representation of the subscript of the offending bit.
If (y1 y2 ... y7) is received and abc != 000, then we assume the bit in position abc is in error and switch it. If abc = 000, we assume there were no errors (so if there are three or more errors we may recover the wrong codeword).
Definition: Generator and Check Matrices

For an [n, k] linear code, the generator matrix is a k x n matrix for which the row space is the given code. A check matrix for an [n, k] code is a generator matrix for the dual code: in other words, an (n - k) x n matrix M for which Mx^T = 0 for all x in the code.
A Construction for binary Hamming Codes

For a given r, form an r x (2^r - 1) matrix M, the columns of which are the binary representations (r bits long) of 1, ..., 2^r - 1. The linear code for which this is the check matrix is a [2^r - 1, 2^r - 1 - r] binary Hamming Code = {x = (x1 x2 ... xn) : Mx^T = 0}.
Example Check Matrix

A check matrix for a [7,4] binary Hamming Code (reconstructed here from the parity equations a, b, c above; column j is the binary representation of j):

L3 = [ 0 0 0 1 1 1 1 ]
     [ 0 1 1 0 0 1 1 ]
     [ 1 0 1 0 1 0 1 ]
Syndrome Decoding

Let y = (y1 y2 ... yn) be a received codeword. The syndrome of y is S := Lr y^T.
- If S = 0 then there was no error.
- If S != 0 then S is the binary representation of some integer 1 <= t <= n = 2^r - 1, and the intended codeword is obtained from y by complementing bit t.
Example

Using L3: suppose y = (1 0 1 0 0 1 0) is received. Then S = L3 y^T = (1 0 0)^T. 100 is 4 in binary, so the intended codeword was (1 0 1 1 0 1 0).
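A minimal sketch of this decoding rule, assuming numpy and the L3 matrix reconstructed above:

```python
import numpy as np

L3 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1]])   # column j = binary form of j

def syndrome_decode(y):
    """Correct at most one flipped bit: S = L3 y^T points at the error."""
    S = (L3 @ y) % 2
    t = int("".join(str(b) for b in S), 2)
    if t:                                # nonzero syndrome: flip bit t
        y = y.copy()
        y[t - 1] ^= 1
    return y

y = np.array([1, 0, 1, 0, 0, 1, 0])
print(syndrome_decode(y))                # -> [1 0 1 1 0 1 0]
```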


Extended [8,4] binary Hamming Code

As with the [7,4] binary Hamming Code, x3, x5, x6, x7 are chosen according to the message, and

x4 := x5 + x6 + x7
x2 := x3 + x6 + x7
x1 := x3 + x5 + x7

Add a new bit x0 such that x0 = x1 + x2 + x3 + x4 + x5 + x6 + x7, i.e. the new bit makes the sum of all the bits zero. x0 is called a parity check.


Extended binary Hamming Code

- The minimum distance between any two codewords is now 4, so an extended Hamming Code is a 1-error correcting and 2-error detecting code.
- The general construction of a [2^r, 2^r - 1 - r] extended code from a [2^r - 1, 2^r - 1 - r] binary Hamming Code is the same: add a parity check bit.


Check Matrix Construction of Extended Hamming Code

The check matrix of an extended Hamming Code can be constructed from the check matrix of a Hamming Code by adding a zero column on the left and a row of 1s to the bottom.






Perfect 1-error correcting

- Hamming Codes are perfect 1-error correcting codes. That is, any received word with at most one error will be decoded correctly, and the code has the smallest possible size of any code that does this.
- For a given r, any perfect 1-error correcting linear code of length n = 2^r - 1 and dimension n - r is a Hamming Code.


Proof: 1-error correcting

A code will be 1-error correcting if:
- spheres of radius 1 centered at codewords cover the code space, and
- the minimum distance between any two codewords is >= 3, since then the spheres of radius 1 centered at codewords will be disjoint.


Proof: 1-error correcting (cont.)

Suppose codewords x, y differ by 1 bit. Then x - y is a codeword of weight 1, and M(x - y) != 0 since no column of M is zero, a contradiction. (Similarly, since the columns of M are distinct, no codeword has weight 2, so the minimum distance is at least 3.)


Applications

- Data compression
- Turbo Codes
- The Hat Game

Data Compression

Hamming Codes can be used for a form of lossy compression. If n = 2^r - 1 for some r, then any n-tuple of bits x is within distance at most 1 from a Hamming codeword c. Let G be a generator matrix for the Hamming Code, and mG = c. For compression, store x as m. For decompression, decode m as c. This saves r bits of space but corrupts (at most) 1 bit, as in the sketch below.
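A sketch of this compression scheme, reusing the hamming74_encode and syndrome_decode sketches from earlier in this answer (0-based indices 2, 4, 5, 6 correspond to the text's positions 3, 5, 6, 7):

```python
import numpy as np

def compress(x):
    """Snap x to its nearest codeword, keep only the 4 message bits."""
    c = syndrome_decode(x)               # nearest codeword, distance <= 1
    return c[[2, 4, 5, 6]]               # x3, x5, x6, x7

def decompress(m):
    """Re-encode the stored message bits back into the codeword c."""
    return np.array(hamming74_encode(*m))

x = np.array([1, 0, 1, 0, 0, 1, 0])
print(decompress(compress(x)))           # differs from x in at most 1 bit
```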


The Hat Game

- A group of n players enter a room, whereupon they each receive a hat. Each player can see everyone else's hat but not his own.
- The players must each simultaneously guess a hat color, or pass. The group loses if any player guesses the wrong hat color or if every player passes.
- Players are not necessarily anonymous; they can be numbered.
- The assignment of hats is assumed to be random, and the players can meet beforehand to devise a strategy.
- The goal is to devise the strategy that gives the highest probability of winning.

UNIT V



5.B.1. Explain the Convolutional codes (16)

Convolutional Codes

Representation and Encoding

Many known codes can be modified by adding an extra code symbol or by deleting a symbol:
- This can create codes of almost any desired rate.
- This can create codes with slightly improved performance.
The resulting code can usually be decoded with only a slight modification to the decoder algorithm, and sometimes the modification process can be applied multiple times in succession.

Modification to Known Codes

1. Puncturing: delete a parity symbol; an (n,k) code becomes an (n-1,k) code.
2. Shortening: delete a message symbol; an (n,k) code becomes an (n-1,k-1) code.
3. Expurgating: delete some subset of codewords; an (n,k) code becomes an (n,k-1) code.
4. Extending: add an additional parity symbol; an (n,k) code becomes an (n+1,k) code.
5. Lengthening: add an additional message symbol; an (n,k) code becomes an (n+1,k+1) code.
6. Augmenting: add a subset of additional code words; an (n,k) code becomes an (n,k+1) code.



Interleaving

- We have assumed so far that bit errors are independent from one bit to the next.
- In mobile radio, fading makes bursts of errors likely.
- Interleaving is used to try to make these errors independent again, as in the sketch below.
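A minimal block-interleaver sketch (write row by row, transmit column by column), so a channel burst is spread out after de-interleaving:

```python
def interleave(bits, rows, cols):
    """Write the stream row by row, transmit it column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Invert the column-major read back into the original row order."""
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
assert deinterleave(interleave(data, 3, 4), 3, 4) == data
```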

Concatenated Codes

- Two levels of coding achieve the performance of very long codes while maintaining shorter decoding complexity.
- The overall rate is the product of the individual code rates.
- A codeword error occurs only if both codes fail; the error probability is found by first evaluating the error probability of the "inner" decoder and then evaluating the error probability of the "outer" decoder.
- Interleaving is always used with concatenated coding.

Example: the CD system uses a concatenated RS code.
- Both codes are constructed over GF(256) (8 bits/symbol).
- The outer code is a (28,24) shortened RS code; the inner code is a (32,28) extended RS code.
- In between the coders is a (28,4) cross-interleaver.
- The overall code rate is r = 0.75.
- Most commercial CD players don't exploit the full power of the error correction coder.

Example: the deep-space channel, which is severely energy limited but not bandwidth limited.
- The inner code is a rate-1/2, constraint length 7 convolutional encoder.
- The outer code is a (255,223) RS code over GF(256), which corrects any burst errors from the convolutional code.
- The overall code rate is r = 0.437.
- A block interleaver holds 2 RS code words.



5.B.2. Explain the sequential search & IS-95 CDMA in detail



IS-95 CDMA

- The IS-95 standard employs the rate (64,6) orthogonal (Walsh) code on the reverse link.
- The inner Walsh code is concatenated with a rate-1/3, constraint length 9 convolutional code.



Viterbi Algorithm for Convolutional Codes

Convolutional Encoder

A convolutional code is specified by three parameters (n, k, K) or (k/n, K), where:
- Rc = k/n is the rate efficiency, determining the number of data bits per coded bit.
- K is the size of the shift register.
- Constraint length = n*K, i.e. each bit has its influence on n*K output bits.

Convolutional Encoder (2,1,3)

Effective code rate, where L is the number of data bits and k = 1 is assumed (the slide formula is reconstructed here assuming the register is flushed with K - 1 zeros):

r_eff = L / (n * (L + K - 1))



Trellis Diagram

The trellis diagram is an extension of the state diagram that shows the passage of time.



Maximum Likelihood

- If the input sequence messages are equally likely, the optimum decoder which minimizes the probability of error is the maximum likelihood decoder.
- Choose the path with the maximum metric among all the paths in the trellis; this path is the "closest" path to the transmitted sequence:
1. Choose the path with minimum Hamming distance from the received sequence.
2. Choose the path with minimum Euclidean distance to the received sequence.



Viterbi Algorithm

- The Viterbi algorithm performs maximum likelihood decoding: it finds a path through the trellis with the largest metric (minimum Hamming distance / minimum Euclidean distance).
- At each step in the trellis, it compares the metric of all paths entering each state, and keeps only the path with the largest metric (minimum Hamming distance) together with its metric. The selected path is known as the survivor path. The algorithm proceeds in the trellis by eliminating the least likely paths.
- Label all the branches in the trellis with their corresponding branch metric.
- For each state in the trellis at time ti, denoted Si (i = 0, 1, 2, 3), compute a parameter Γ(Si, ti). Set Γ(Si, ti) = 0 for i = 2.
- At time ti, compute the partial path metrics for all the paths entering each state, and set Γ(Si, ti) equal to the best partial path metric entering each state at time ti.
- Keep the survivor path and delete the dead paths from the trellis.

Software Implementation: the Add-Compare-Select computation (see the sketch below).

Problems with the Viterbi Algorithm
- Computational complexity increases exponentially with constraint length.
- The usually used Hamming distance in the VA is sub-optimum and therefore loses some performance.
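A compact hard-decision sketch of the add-compare-select recursion. The taps (0b111, 0b101) are the classic K = 3 textbook code, chosen only to keep the trellis small; this is not the encoder of any specific question above:

```python
G = (0b111, 0b101)        # generator tap masks over [newest ... oldest bit]
K = 3                     # shift-register (constraint) length
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits + [0] * (K - 1):       # flush with zeros
        reg = (b << (K - 1)) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)     # start in state 0
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):               # add-compare-select
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                expect = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(r, expect))
                if m < new_metric[ns]:         # keep only the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(NSTATES), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]              # drop the flushing zeros

msg = [1, 0, 1, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                     # inject one channel error
assert viterbi(rx) == msg                      # the error is corrected
```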

5.B.3. Explain about Turbo Codes in detail

Turbo Codes History

- At the IEEE International Communications Conference 1993 in Geneva, Berrou and Glavieux's paper 'Near Shannon Limit Error-Correcting Coding: Turbo codes' provided virtually error-free communication at data-rate/power efficiencies beyond what most experts thought possible.
- Precursors existed 30 years earlier. Forney: a nonsystematic, nonrecursive combination of convolutional encoders. Berrou et al. in 1993: recursive systematic encoders, based on pseudo-random interleaving; these work better for high rates or high levels of noise, and use return-to-zero sequences.

Turbo Encoder: Parallel Concatenation

- The k-bit block is encoded N times with different versions (orders).
- The probability that the sequence remains RTZ is 1/2^(Nv).
- Randomness with 2 encoders gives an error probability of 10^-5.
- The permutations are there to fix d_min.

Recursive Systematic Coders: Return-to-Zero Sequences

- A non-recursive encoder's state goes to zero after v '0' inputs; an RSC goes to zero with probability P = 1/2^v.
- If one wants to transform a convolutional code into a block code, this is automatically built in: the initial state i will repeat after encoding k.

(Figure: the convolutional encoders.)

Turbo Decoding Criterion

- For n probabilistic processors working together to estimate common symbols, all of them should agree on the symbols with the same probabilities that a single decoder could produce.
- The inputs to the decoders are the log-likelihood ratios (LLR) for the individual symbol d. The LLR value for the symbol d is defined (Berrou) as (the slide formula is reconstructed from this definition):

Λ(d) = log [ P(d = 1) / P(d = 0) ]

Turbo Decoder

The SISO decoder re-evaluates the LLR utilizing the local Y1 and Y2 redundancies to improve the confidence.

Turbo Decoding

Assume:
- Ui: modulating bit {0,1}
- Yi: received bit, the output of a correlator; it can take any value (soft).

The turbo decoder input is the log-likelihood ratio

R(ui) = log [ P(Yi | Ui = 1) / P(Yi | Ui = 0) ]

For BPSK, R(ui) = 2*Yi/σ², where σ² is the noise variance. For each data bit, calculate the LLR given that a sequence of bits was sent.
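A small sketch of this channel LLR for BPSK (assuming antipodal +1/-1 symbols in AWGN with noise variance sigma2, under which the ratio above reduces to 2Y/sigma2):

```python
import math

def bpsk_llr(y, sigma2):
    """log P(y | u=1) - log P(y | u=0) for BPSK(+1/-1) in AWGN:
    the Gaussian densities cancel down to 2*y/sigma2."""
    return 2.0 * y / sigma2

def prob_one(llr):
    """Soft decision: P(u = 1) recovered from its LLR."""
    return 1.0 / (1.0 + math.exp(-llr))

y = 0.8                         # a soft correlator output
print(bpsk_llr(y, 0.5), prob_one(bpsk_llr(y, 0.5)))
```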

Turbo Decoding