Unit I.
Part A:
1. Name the elements of Digital Image Processing Systems
2. Why does a Digital Image Processing System need specialized hardware & software?
3. How is an image captured using a single sensor?
4. How does acquisition affect the display in the elements of digital image processing systems?
5. Why is archival important in storage of images?
6. What is the significance of gray level resolution?
7. What is meant by spatial resolution?
8. What is meant by nearest neighbor interpolation?
9. How can image shrinking be done?
10. Why should sampling and quantization be done for an image?
11. Differentiate between sampling and quantization.
12. What is the requirement that governs the choice of on-line storage or archival storage for images?
13.What is the need for studying the basic relationships between pixels?
14.What is the significance of pixel replication?
15.Differentiate between 4-point adjacency and 8-point adjacency.
16.Define connectivity of pixels.
17.What is a Transform? Why do we use Transforms in image processing?
18.Name the properties of the Cosine Transform
19. Give the mathematical representation of the one-dimensional Sine Transform pair.
20.Name the properties of the Sine Transform
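
Questions 8, 9, and 14 above (nearest-neighbor interpolation, image shrinking, pixel replication) can all be illustrated with one minimal Python sketch. This is illustrative only; the function name `nn_resize` and the list-of-rows image representation are my own choices, not from any particular textbook:

```python
def nn_resize(img, new_h, new_w):
    """Resize a grayscale image (list of rows) by nearest-neighbor interpolation."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # map each output row index back to the nearest source row
        src_i = min(old_h - 1, int(i * old_h / new_h))
        row = []
        for j in range(new_w):
            # likewise for columns
            src_j = min(old_w - 1, int(j * old_w / new_w))
            row.append(img[src_i][src_j])
        out.append(row)
    return out

img = [[10, 20],
       [30, 40]]
zoomed = nn_resize(img, 4, 4)    # 2x zoom: each source pixel becomes a 2x2 block
shrunk = nn_resize(zoomed, 2, 2) # shrinking by 2 keeps every other pixel
```

Zooming by an integer factor with this scheme is exactly pixel replication; shrinking by an integer factor keeps every k-th pixel of every k-th row.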

Part B:
1. Discuss any three elements of digital image processing systems in detail
2. Explain Sampling and Quantization with a simple example.
3. Discuss image acquisition in detail.
4. Explain the following
a. Representation of Digital Images
b. Spatial and Gray-level Resolution
5. Explain the discrete Fourier transform and explain any four properties of the DFT.
6. Discuss the properties of the 2D DFT
7. Discuss the properties of the Cosine Transform
8. Discuss the properties of the Sine Transform
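
For question 5, the one-dimensional DFT can be sketched directly from its definition. This naive O(N²) version is for illustration only (in practice the FFT is used); the function name `dft` is my own:

```python
import cmath

def dft(x):
    """Naive 1-D DFT: X[k] = sum over n of x[n] * exp(-j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]
```

A constant signal puts all its energy in the DC term: `dft([1, 1, 1, 1])` is approximately `[4, 0, 0, 0]`, which illustrates why the k = 0 coefficient equals the sum of the samples.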

Unit II.
Part A:
1. What is a mask?
2. How can an image negative be obtained?
3. What is the difference between contrast stretching and compression of dynamic range?
4. What is a histogram?
5. What is meant by histogram equalization?
6. How can histogram equalization be applied locally?
7. In local Histogram processing, why are non-overlapping regions used?
8. What is meant by histogram matching or histogram specification?
9. How can noise reduction be accomplished using image averaging?
10. What is the effect of averaging on detail in an image?
11.What is image restoration?
12. What is inverse filtering?
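
Questions 4–8 revolve around histogram processing. A minimal sketch of global histogram equalization (the local variant applies the same mapping per neighborhood or tile); the function name `equalize` and the flat pixel-list representation are my own choices:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of gray levels in [0, levels-1]."""
    n = len(pixels)
    # 1. histogram of gray levels
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # 2. cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    # 3. map each level r to s = round((L-1) * CDF(r))
    lut = [round((levels - 1) * c) for c in cdf]
    return [lut[p] for p in pixels]
```

Because the mapping is a monotone lookup table built from the CDF, equal input levels stay equal and the brightest occupied level is stretched to the top of the range.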

Part B:
1. What is image enhancement? Explain contrast stretching and compression of dynamic range.
2. Explain histogram equalization and histogram specification. How can they be applied locally?
3. Discuss the Wiener Filter method for restoration of images.
4. What is constrained least squares restoration? Explain.
5. Explain spatial restoration.
6. Explain
a. Log transformation
b. Power law transformation
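
The two gray-level transformations in question 6 can be sketched in a few lines. The standard forms are s = c·log(1 + r) and s = c·r^γ; the function names and the choice of c below are mine:

```python
import math

def log_transform(pixels, vmax=255):
    """s = c * log(1 + r), with c chosen so the level vmax maps back to vmax."""
    c = vmax / math.log(1 + vmax)
    return [round(c * math.log(1 + p)) for p in pixels]

def power_law(pixels, gamma, vmax=255):
    """s = vmax * (r / vmax) ** gamma  (gamma correction)."""
    return [round(vmax * (p / vmax) ** gamma) for p in pixels]
```

Both map 0 to 0 and vmax to vmax; with γ < 1 (or the log transform), dark pixels are brightened, expanding detail in the low end of the dynamic range.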

Unit III.
Part A:
1. What is the significance of segmentation?
2. What is meant by gray level discontinuity?
3. What is meant by isolated point?
4. Define point detection.
5. What is the significance of line detection?
6. How do image acquisition imperfections affect edges?
7. What is meant by image thresholding?
8. Define background point.
9. What is meant by region growing?
10. What is the significance of quad tree?
11. What is meant by catchment basin?
12. List the drawbacks of chain codes
13. What is the significance of boundary segments?
14. List the segmentation techniques

Part B:
1. Explain edge detection.
2. Explain global processing via the Hough transform.
3. Explain thresholding
4. Explain region based segmentation
5. Explain polygonal approximation
6. Explain
a. Signatures
b. Skeletons

Unit IV.
Part A:
1. What is a pattern?
2. What is a pattern classifier?
3. What are optimal statistical classifiers?
4. Give the mathematical form of the Bayes decision function.
5. What are artificial neural networks?
6. What is a multilayer feedforward neural network?
7. What is meant by “training” in artificial neural networks?
8. Define tree grammars.

Part B:
1. Explain (a) minimum distance classifier (b) matching by correlation.
2. What are (a) optimal statistical classifiers (b) Bayes classifier for Gaussian pattern classes?
3. Explain perceptron for two pattern classes.
4. Explain multilayer feedforward networks in pattern recognition.
5. Explain
i. Matching shape numbers
ii. String matching
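
Part B question 1(a)'s minimum distance classifier can be sketched in a few lines: assign a pattern vector to the class whose mean (prototype) vector is nearest in Euclidean distance. The function name and the dictionary-of-prototypes layout are my own choices:

```python
import math

def min_distance_classify(x, prototypes):
    """Assign pattern vector x to the class whose prototype (mean vector)
    is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(prototypes, key=lambda cls: dist(x, prototypes[cls]))

protos = {"class1": (0.0, 0.0), "class2": (5.0, 5.0)}
min_distance_classify((1.0, 1.2), protos)  # nearest to class1's mean
```

This is the simplest decision-function classifier: the decision boundary between two classes is the perpendicular bisector of the line joining their prototypes.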

Unit V.
Part A:
1. What is the need for image compression?
2. What is coding redundancy?
3. What is interpixel redundancy?
4. What is psychovisual redundancy?
5. What is meant by fidelity criteria?
6. What is run length coding?
7. Differentiate between lossy compression and lossless compression methods.
8. What is meant by wavelet coding?
9. What is JPEG?
10. Differentiate between the JPEG and JPEG 2000 standards.
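
Question 6's one-dimensional run-length coding, which exploits interpixel redundancy along a row, fits in a short sketch (the function name is mine):

```python
def rle_encode(row):
    """1-D run-length code: (value, run_length) pairs for one image row."""
    runs = []
    for p in row:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

rle_encode([0, 0, 0, 255, 255, 0])   # [(0, 3), (255, 2), (0, 1)]
```

The longer and fewer the runs (e.g. in binary document images), the greater the compression; two-dimensional schemes extend this by coding run transitions relative to the previous row.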

Part B:
1. Explain any two basic data redundancies in digital image compression.
2. Explain variable length coding and Huffman coding.
3. Explain arithmetic coding and LZW coding.
4. Explain (a) one-dimensional run length coding (b) two-dimensional run length coding.
5. Explain (a) lossy predictive coding (b) transform coding.
6. Explain wavelet based image compression.
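
Part B question 2's Huffman coding can be sketched with the standard greedy construction: repeatedly merge the two least-frequent subtrees. A minimal stdlib-only sketch for hashable symbols such as characters (the function name and tree encoding are my own):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} for a sequence of symbols.
    More frequent symbols receive shorter code words."""
    freq = Counter(symbols)
    # heap entries: (frequency, tiebreak, tree); a tree is a symbol or a (left, right) pair
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (t1, t2)))  # ...are merged
        tiebreak += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: branch on 0 / 1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                             # leaf: record the code word
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

For the input `"aaaabbc"` the symbol `a` gets a 1-bit code and `b`, `c` get 2-bit codes; the resulting code is prefix-free, which is what makes variable-length decoding unambiguous.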