
4. (a) What is the distinction between lossy and lossless data compression?

Lossless compression preserves the data exactly, so the original can be
reconstructed without loss. Lossy compression aims to obtain the best possible
fidelity for a given bit-rate, or to minimise the bit-rate needed to achieve a given
fidelity, but will not produce a complete facsimile of the original data.

10. Briefly describe the four basic types of data redundancy that data
compression algorithms can apply to audio, image and video signals.

Four basic types of redundancy:
Temporal -- correlation over time in 1D data/signals, audio etc.
Spatial -- correlation between neighbouring pixels or data items (see the
sketch below).
Spectral -- correlation between colour or luminance components. This uses
the frequency domain to exploit relationships between the frequency of change
in data.
Psycho-visual/psycho-acoustic -- exploit perceptual properties of the human
visual or aural system to compress data.
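As a rough illustration of spatial redundancy, here is a minimal sketch (Python
with NumPy; the image-loading line is a commented-out assumption) that
measures the correlation between horizontally adjacent pixels of a greyscale
image. Natural images typically score close to 1, and that correlation is exactly
what a spatial coder exploits.

import numpy as np

def horizontal_correlation(img):
    # Pair every pixel with its right-hand neighbour and measure how
    # strongly the two sets of values co-vary (1.0 = perfectly correlated).
    left = img[:, :-1].astype(float).ravel()
    right = img[:, 1:].astype(float).ravel()
    return np.corrcoef(left, right)[0, 1]

# Hypothetical usage, e.g. with Pillow:
# img = np.asarray(Image.open("photo.png").convert("L"))
# print(horizontal_correlation(img))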

3. (a) What is the distinction between lossy and lossless data compression?

Lossless Compression
Where data is compressed and can be reconstituted (uncompressed) without
loss of detail or information. These are also referred to as bit-preserving or
reversible compression systems.
Lossy Compression
Where the aim is to obtain the best possible fidelity for a given bit-rate, or to
minimise the bit-rate needed to achieve a given fidelity measure. Video and
audio compression techniques are most suited to this form of compression.

1. (a) What is meant by the terms Multimedia and Hypermedia? Distinguish

between these two concepts.


Multimedia --- An application which uses a collection of multiple media sources,
e.g. text, graphics, images, sound/audio, animation and/or video.
Hypermedia --- An application which uses associative relationships among
information contained within multiple media data for the purpose of facilitating
access to, and manipulation of, the information encapsulated by the data.
(b) What is meant by the terms static media and dynamic media? Give two
examples of each type of media.
Static Media --- does not change over time, e.g. text, graphics.
Dynamic Media --- time dependent (temporal), e.g. video, sound, animation.

2. (a) Why is file or data compression necessary for Multimedia activities?


Multimedia files are very large; therefore, for storage, file transfer etc., file sizes
need to be reduced, often as part of the file format. Text and other files may also
be encoded/compressed for email and other applications.

17. (a) What is MIDI? How is a basic MIDI message structured?

MIDI: a protocol that enables computers, synthesizers, keyboards, and other
musical or (even) multimedia devices to communicate with each other.
MIDI MESSAGE:
A MIDI message includes a status byte and up to two data bytes.
Status byte:
The most significant bit of the status byte is set to 1.
The 4 low-order bits identify which channel the message belongs to (four bits
give 16 possible channels).
The 3 remaining bits identify the message type.
Data bytes:
The most significant bit of a data byte is set to 0.
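A minimal sketch of unpacking such a three-byte channel message (Python; the
example bytes are a Note On for middle C, and the field names are illustrative):

def parse_midi_message(msg):
    # msg: a bytes object, e.g. b'\x90\x3C\x40' = Note On, channel 1,
    # note 60 (middle C), velocity 64.
    status = msg[0]
    assert status & 0x80, "a status byte has its most significant bit set"
    message_type = (status >> 4) & 0x07  # 3 bits identify the message
    channel = status & 0x0F              # 4 low-order bits: 16 channels
    data = list(msg[1:])                 # up to two data bytes
    assert all(b < 0x80 for b in data), "data bytes have their top bit clear"
    return message_type, channel, data

print(parse_midi_message(b'\x90\x3C\x40'))  # -> (1, 0, [60, 64])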
18. In what ways can MIDI be used effectively in Multimedia Applications, as
opposed to strictly musical applications?
Many applications:
Low-bandwidth (if lower-quality) music on the Web; QuickTime etc. support
the MIDI musical instrument set.
Sound effects --- a low-bandwidth alternative to audio samples; a sound-effects
set is part of the GM sound set.
Control of external devices --- e.g. synchronisation of video and audio
(SMPTE), MIDI System Exclusive, audio recorders, samplers.
Control of synthesis --- envelope control etc.
MPEG-4 compression control --- see Part (c), Digital Audio.

19. How can MIDI be used with modern data compression techniques?
Briefly describe how such compression techniques may be implemented.
We have already seen the need for compression in digital audio -- large data
files. Basic ideas of compression (see next chapter) are used as an integral part
of audio formats: MP3, RealAudio etc.
MPEG-4 audio actually combines compression, synthesis and MIDI, to massive
effect on compression.
MIDI and synthesis encode what note to play and how to play it with a small
number of parameters -- a much greater reduction than simply having some
encoded bits of audio. Responsibility for creating the audio is delegated to the
generation side.
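For a rough sense of the scale of the saving: a MIDI Note On message occupies
just 3 bytes, whereas one second of CD-quality stereo PCM audio occupies
44,100 samples/s x 2 channels x 2 bytes = 176,400 bytes.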
MPEG-4 Structured Audio comprises six tools:
SAOL, the Structured Audio Orchestra Language;
SASL, the Structured Audio Score Language;
SASBF, the Structured Audio Sample Bank Format;
a set of MIDI semantics, which describes how to control SAOL with MIDI;
a scheduler, which describes how to take the above parts and create sound;
AudioBIFS, part of BIFS, which lets you make audio soundtracks in MPEG-4
using a variety of tools and effects-processing techniques.
MIDI is the control language for the synthesis part:
As well as being controlled with SASL scripts, synthesis can be controlled with
MIDI files and scores in MPEG-4. MIDI is today's most commonly used
representation for music score data, and many sophisticated authoring tools
(such as sequencers) work with MIDI.
The MIDI syntax is external to the MPEG-4 Structured Audio standard; only
references to the MIDI Manufacturers Association's definition appear in the
standard. But in order to make the MIDI controls work right in the MPEG
context, some semantics (what the instructions "mean") have been redefined in
MPEG-4. The new semantics are carefully defined as part of the MPEG-4
specification.

1. (a) Give a definition of multimedia and a multimedia system.


Multimedia is the field concerned with the computer-controlled integration of
text, graphics, drawings, still and moving images (Video), animation, audio, and
any other media where every type of information can be represented, stored,
transmitted and processed digitally.
A Multimedia System is a system capable of processing multimedia data and
applications.

(b) What are the key distinctions between multimedia data and more conventional
types of media?
Multimedia systems deal with the generation, manipulation, storage, presentation,
and communication of information in digital form.

The data may be in a variety of formats: text, graphics, images, audio, video.
A majority of this data is large, and the different media may need
synchronisation -- the data may have temporal relationships as an integral
property.
Some media is time-independent (static or discrete media): normal data, text,
single images and graphics are examples.
Video, animation and audio are examples of continuous (dynamic) media.

13. What key issues or problems does a multimedia system have to deal with
when handling multimedia data?
A Multimedia system has four basic characteristics:
Multimedia systems must be computer controlled.
Multimedia systems are integrated.
The information they handle must be represented digitally.
The interface to the final presentation of media is usually interactive.
Multimedia systems may have to render a variety of media at the same instant
-- a distinction from normal applications. There is a temporal relationship
between many forms of media (e.g. video and audio). There are two forms of
problem here:
Sequencing within the media -- playing frames in the correct order/time frame
in video.
Synchronisation -- inter-media scheduling (e.g. video and audio). Lip
synchronisation is clearly important for humans watching playback of video
and audio, and even animation and audio. Ever tried watching an
out-of-(lip-)sync film for a long time?
The key issues multimedia systems need to deal with here are:
How to represent and store temporal information.
How to strictly maintain the temporal relationships on playback/retrieval.
What processes are involved in the above.
Data has to be represented digitally, so many initial sources of data need to be
digitised -- translated from an analogue source to a digital representation. This
will involve scanning (graphics, still images) and sampling (audio/video),
although digital cameras now allow direct scene-to-digital capture of images
and video.
The data is large -- easily several MB for audio and video -- therefore storage,
transfer (bandwidth) and processing overheads are high. Data compression
techniques are very common, as the worked figures below suggest.
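For example, one minute of CD-quality stereo audio is 44,100 samples/s x
2 bytes x 2 channels x 60 s, roughly 10.6 MB, and one second of uncompressed
640x480, 24-bit video at 25 frames/s is around 23 MB.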

9. List three distinct models of colour used in Multimedia. Explain why there
are a number of different colour models exploited in multimedia data formats.

Possible models:
RGB
CIE Chromaticity
YIQ colour space
YUV (YCrCb)
CMY/CMYK
Different models reflect the need to represent colour in a perceptually relevant
model for effective compression.
Different models are also due to the evolution of colour media requirements:
video (YIQ, YUV), display (RGB) and print (CMYK).

Give one example of a compression algorithm for each class.

EXAMPLES:
Temporal: any audio/video compression method -- zero-length suppression,
pattern substitution, Pulse Code Modulation (a few variants), MPEG Audio,
MPEG Video, H.264. [1]
Spatial: any image/video compression algorithm -- GIF, JPEG, MPEG, H.264. [1]
Spectral: JPEG, MPEG, H.264. [1]
Psycho-visual: MPEG Audio, MPEG Video, JPEG (colour conversion). [1]
(b) What advantage does arithmetic coding offer over Huffman coding for data
compression?
Huffman coding assumes an integer number (k) of bits for each symbol, hence k
is never less than 1. [1]
Arithmetic coding can represent a fractional number of bits per symbol and can
achieve better compression ratios. [1]
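As a worked illustration: a symbol of probability 0.9 carries only
-log2(0.9), about 0.15 bits, of information, yet Huffman coding must still spend
at least one whole bit on it. Arithmetic coding can approach the 0.15-bit cost
because it encodes the entire message as a single fractional-precision number.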
2. Briefly state an algorithm for arithmetic decoding.
Coding (seen in lectures):
The idea behind arithmetic coding is:
to have a probability line, [0, 1), and
to assign to every symbol a range on this line based on its probability,
ordered in terms of probability, highest first.
Note: the higher the probability, the larger the range assigned to the symbol.
For each symbol in the sequence, assign a code based on the symbol's
probability, then subdivide for all the symbols:

range = high - low;
high = low + range * high_range of the symbol being coded;
low = low + range * low_range of the symbol being coded;

Decoding is the opposite, so we need to work out (unseen in lectures), for the
current code value:
look up the range it falls in and output the corresponding symbol; [1]
eliminate the symbol's effect by subtracting the low value of its range and
dividing by the width of the range; [2]
repeat the above two steps until zero is reached (see the last part of the
problem). [2]
Total 5 marks. Unseen (the coding algorithm was discussed in lectures;
decoding was simply mentioned as the reverse process). A runnable sketch of
the decoding loop follows below.
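A minimal sketch of that loop (Python, floating point for clarity; a real coder
uses integer arithmetic with renormalisation, and this decoder is assumed to
know the model and the message length rather than stopping at a terminator
symbol):

# Illustrative model: symbol ranges on the probability line [0, 1).
RANGES = {'a': (0.0, 0.5), 'b': (0.5, 0.8), 'c': (0.8, 1.0)}

def arithmetic_decode(value, n_symbols):
    out = []
    for _ in range(n_symbols):
        # Step 1: find which symbol's range the current code value falls in.
        for sym, (lo, hi) in RANGES.items():
            if lo <= value < hi:
                out.append(sym)
                # Step 2: eliminate the symbol's effect -- subtract the low
                # end of its range and divide by the width of the range.
                value = (value - lo) / (hi - lo)
                break
    return ''.join(out)

# Encoding 'bac' with the same model narrows the interval to [0.62, 0.65),
# so any value inside it, e.g. 0.63, decodes back to the message:
print(arithmetic_decode(0.63, 3))  # -> 'bac'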
(c) Why are I-frames inserted into the compressed output stream relatively
frequently?
Differences between frames get too large: errors become hard to track, blocks
move too fast, etc. So coding needs to be restarted with a new I-frame.
3. What is the key difference between I-Frames, P-Frames and B-Frames?
I-Frame: the basic reference frame for each Group of Pictures; essentially a
JPEG-compressed image. [1]
P-Frame: a coded forward-difference frame w.r.t. the last I- or P-frame. [1]
B-Frame: a coded bidirectional-difference frame, predicted w.r.t. the previous
and next I- or P-frames. [1]

Q3. (a) What is the difference between reverb and echo?

Echo implies a distinct, delayed version of a sound. [1]
With reverb, each delayed sound wave arrives in such a short period of time
that we do not perceive each reflection as a copy of the original sound. [1]
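A rough sketch of the distinction (Python with NumPy; the delay times and
gains are illustrative): a single long delay is heard as a distinct echo, while
summing many short, decaying delays crudely approximates reverb.

import numpy as np

def echo(signal, sr, delay_s=0.4, gain=0.5):
    # One long (~400 ms) delay: perceived as a distinct copy of the sound.
    d = int(delay_s * sr)
    out = signal.copy()
    out[d:] += gain * signal[:-d]
    return out

def crude_reverb(signal, sr, n_taps=12, max_delay_s=0.05):
    # Many short (< 50 ms) delays with decaying gains: the reflections
    # arrive too close together to be perceived individually.
    out = signal.copy()
    for i in range(1, n_taps + 1):
        d = int(max_delay_s * sr * i / n_taps)
        out[d:] += (0.5 ** i) * signal[:-d]
    return out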
Q3. (a) What are critical bands in relation to the human ear's perception of
sound?
Frequency masking occurs in the human ear, where certain frequencies are
masked by neighbouring frequencies. The range of closeness for frequency
masking depends on the frequencies and their relative amplitudes. Each band
within which frequencies are masked is called a critical band.

(b) How does MPEG audio compression achieve critical band approximation?

Create critical-band filter banks. [1]
Fourier transform the audio data. [1]
Bandpass-filter the data according to the known critical bandwidths (see the
sketch below). [1]
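A sketch of the idea (Python with NumPy; the band edges are the conventional
Bark-scale values, used here as a rough stand-in for the exact MPEG tables):

import numpy as np

# Approximate critical-band (Bark-scale) edges in Hz.
BAND_EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270,
              1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300,
              6400, 7700, 9500, 12000, 15500]

def band_energies(frame, sr):
    # Fourier transform one frame of audio, then sum the spectral energy
    # falling inside each critical band.
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(BAND_EDGES[:-1], BAND_EDGES[1:])]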

8. Briefly explain how the human visual system senses colour. How is colour
exploited in the compression of multimedia graphics, images and video?

The eye is basically just a biological camera.
The eye, through the lens etc., focuses light onto the retina (the back of the
eye).
The retina consists of neurons that fire nerve signals proportional to the
incident light.
Each neuron is either a rod or a cone. Rods are not sensitive to colour.
Cones are organised in banks that sense red, green and blue.
Multimedia context:
Since rods do not sense colour, only luminous intensity, the eye is more
sensitive to luminance than to colour. The eye is also more sensitive to red and
green than to blue (this is due to the evolutionary need to see fellow humans,
in whose skin hues blue is not prevalent).
So any multimedia compression technique should use a colour representation
that models the human visual system. We can then encode luminance at a
higher bandwidth (more bits) than colour, as it is much more perceptually
relevant.
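For example, the standard RGB-to-YUV conversion (BT.601 luma weights)
separates luminance from chrominance precisely so that Y can be kept at full
resolution while U and V are subsampled; a sketch in Python:

import numpy as np

def rgb_to_yuv(rgb):
    # rgb: float array in [0, 1] with shape (..., 3). The luma weights
    # reflect the eye's greater sensitivity to green, then red, then blue.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance: keep full resolution
    u = 0.492 * (b - y)                    # chrominance: candidates for
    v = 0.877 * (r - y)                    # subsampling (fewer bits)
    return np.stack([y, u, v], axis=-1)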
