Fundamentals
The physical meaning of the image is
determined by the source of the image.
0 < f(x, y) < ∞
Function f(x, y) may be characterized by two components:
Illumination component i(x, y): the amount of source illumination incident on the scene
Reflectance component r(x, y): the amount of illumination reflected by the objects in the scene
f(x, y) = i(x, y) r(x, y)
where 0 < i(x, y) < ∞ and 0 ≤ r(x, y) ≤ 1
Gray Scale
The intensity of a monochrome image at any coordinate
(xi, yi) is called the
gray level (l) of the image at that point
l = f(xi, yi)
Lmin ≤ l ≤ Lmax
Repeating this process line by line from the top of the image generates a 2D digital image.
Image Sampling and Quantization
• Sampling: digitizing the 2-dimensional spatial coordinate values
• Quantization: digitizing the amplitude values (brightness level)
b = M × N × k
If M = N, then
b = N²k
17/2/2011 38
k-bit image
An image having 2^k gray levels.
An image having 256 gray levels is called an 8-bit image.
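The storage formula above can be checked with a short Python sketch; the 1024 × 1024, 8-bit image is an illustrative choice, not a value from the slides:

```python
# Storage needed for a digital image: b = M * N * k bits,
# where the image is M x N pixels and each pixel uses k bits,
# giving 2**k gray levels.
M, N, k = 1024, 1024, 8   # illustrative values: a 1024 x 1024, 8-bit image

gray_levels = 2 ** k      # number of distinct gray levels
b = M * N * k             # total number of bits

print(gray_levels)        # 256
print(b)                  # 8388608 bits (= 1 MB)
```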
Saturation level is the highest value beyond which all intensity levels are clipped. A saturated area appears as a region of constant, high intensity.
(a) & (b): virtually impossible to tell apart; the level of detail lost is too fine.
(c): a very slight fine checkerboard pattern in the borders and some graininess throughout the image.
Checkerboard Effect
When the number of pixels in an image is reduced while keeping the number of gray levels constant, fine checkerboard patterns appear at the edges of the image. This effect is called the checkerboard effect.
False Contouring
When the number of gray levels in the image is low, the foreground details of the image merge with the background details, causing ridge-like structures. This degradation phenomenon is known as false contouring.
(The ridges resemble topographic contours in a map.)
Varying Gray Levels; Fixed Number of Samples
False contouring
2.5 Basic Relationships Between Pixels
•Neighbors of a pixel
•Adjacency
•Connectivity
•Regions
•Boundary
There are three kinds of neighbors of a pixel p:
N4(p) 4-neighbors: the set of horizontal and vertical neighbors
ND(p) diagonal neighbors: the set of the four diagonal neighbors
N8(p) 8-neighbors: the union N4(p) ∪ ND(p)

  O            O   O          O O O
O X O            X            O X O
  O            O   O          O O O
N4(p)          ND(p)          N8(p)
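The three neighbor sets can be sketched in a few lines of Python; coordinates are (row, col) pairs, and image-border clipping is omitted for brevity:

```python
def n4(p):
    """Horizontal and vertical neighbors of pixel p."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbors of pixel p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbors: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

print(sorted(n4((2, 2))))  # [(1, 2), (2, 1), (2, 3), (3, 2)]
```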
Two pixels that are neighbors and have the same gray level (or satisfy some other specified similarity criterion) are said to be adjacent.

I) 4-adjacency
Two pixels p & q with values from V are 4-adjacent if q is in the set N4(p).

II) 8-adjacency
Two pixels p & q with values from V are 8-adjacent if q is in the set N8(p).
III) m-adjacency (mixed adjacency)
Two pixels p & q with values from V are
m-adjacent if
a) q is in the set N4(p), or
b) q is in ND(p) and the set N4(p) ∩ N4(q) has
no pixels whose values are from V.
Find 8-adjacency & m-adjacency of
the pixel in the centre.
Note: V = {1}
V = {1}
Fig (b) shows the ambiguity in 8-adjacency
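A minimal Python sketch of the m-adjacency test; the 3 × 3 array below is an assumed example (not the slide's figure) chosen so that one diagonal pair is m-adjacent and another is not:

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_v(img, p, V):
    """True if p lies inside the image and its value is in V."""
    x, y = p
    return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

def m_adjacent(img, p, q, V):
    """m-adjacency: 4-adjacent, or diagonally adjacent with no
    shared 4-neighbor whose value is in V."""
    if not (in_v(img, p, V) and in_v(img, q, V)):
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        return not any(in_v(img, s, V) for s in n4(p) & n4(q))
    return False

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
V = {1}
print(m_adjacent(img, (0, 2), (1, 1), V))  # False: (0, 1) already bridges them
print(m_adjacent(img, (1, 1), (2, 2), V))  # True: no shared 4-neighbor in V
```

Rejecting the diagonal link whenever a shared 4-neighbor is in V is exactly what removes the multiple-path ambiguity of plain 8-adjacency.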
A (digital) path from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels
(x0, y0), (x1, y1), …, (xn, yn)
where pixels (xi, yi) & (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n.
Boundary
The boundary (aka border or contour) of a region R is
the set of pixels in R that have one or more neighbours
that are not in R.
0 0 0 0 0
0 1 1 0 0
0 1 1 0 0
0 1 1 1 0
0 1 1 1 0
0 0 0 0 0
Is it a boundary pixel?
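The boundary definition above can be applied directly in Python; the region below is the one from the slide's example, and under this definition only the pixel at row 3, column 2 is interior:

```python
def boundary(img, V={1}):
    """Pixels of region R (values in V) with at least one 4-neighbor
    outside R; region pixels on the image border count as boundary too."""
    rows, cols = len(img), len(img[0])
    b = set()
    for x in range(rows):
        for y in range(cols):
            if img[x][y] not in V:
                continue
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if not (0 <= nx < rows and 0 <= ny < cols) or img[nx][ny] not in V:
                    b.add((x, y))
                    break
    return b

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print((3, 2) in boundary(img))  # False: all four of its 4-neighbors are 1
```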
Boundary
another definition
0 0 0 0 0
0 0 1 0 0
0 0 1 0 0
0 0 1 0 0
0 0 1 0 0
0 0 0 0 0
Edge vs Boundary
The boundary of a region forms a closed path: a “global concept”.

Distance Measures
For pixels p, q and z, a function D is a distance metric if
• D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
• D(p, q) = D(q, p)
• D(p, z) ≤ D(p, q) + D(q, z)
Pixel   Coordinates
p       (x, y)
q       (s, t)

D4(p, q) = |x − s| + |y − t|        (city-block distance)
D8(p, q) = max(|x − s|, |y − t|)    (chessboard distance)
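The three distance measures are one-liners in Python; the pair (0, 0) and (3, 4) is an arbitrary example showing that they generally disagree:

```python
import math

def d_e(p, q):
    """Euclidean distance."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d4(p, q), d8(p, q))  # 5.0 7 4
```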
What will be the shape of the figure which consists of all pixels which are at a distance of ≤ r from pixel (x, y), when the distance measure is
• Euclidean? A disk
• City block? A diamond
• Chessboard? A square

Pixels with D4 ≤ 2 (diamond):      Pixels with D8 ≤ 2 (square):
        2                          2 2 2 2 2
      2 1 2                        2 1 1 1 2
    2 1 0 1 2                      2 1 0 1 2
      2 1 2                        2 1 1 1 2
        2                          2 2 2 2 2
D4 and D8 distances between p and q are independent of any path that might exist between the points; they depend only on the coordinates of the points.
Array vs Matrix Operations
Linear vs Nonlinear Operations
Is the sum operator, Σ, linear?
Is the max operator, whose function is to find the
maximum value of the pixels in an image, linear?
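Both questions can be answered numerically: an operator H is linear if H(a·f + b·g) = a·H(f) + b·H(g) for all inputs and constants. The sample arrays below are arbitrary; one counterexample is enough to show max is nonlinear:

```python
f = [1, 5, 2]
g = [4, 0, 3]
a, b = 2, -1

comb = [a * x + b * y for x, y in zip(f, g)]  # the image a*f + b*g

# Sum operator: both sides agree (and do for any inputs), so it is linear.
print(sum(comb), a * sum(f) + b * sum(g))  # 9 9

# Max operator: this counterexample shows it is nonlinear.
print(max(comb), a * max(f) + b * max(g))  # 10 6
```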
Arithmetic Operations
Subtraction enhances
the differences between
the two images
Mask Mode Radiography
One of the most successful commercial applications of image
subtraction
(a) Mask image
(b) Live image taken after injection of iodine into the bloodstream
(c) Difference between image (a) and image (b)
(d) Image after enhancing the contrast of image (c); it clearly shows the propagation of the contrast medium through the blood vessels
• A noisy image:
g(x, y) = f(x, y) + n(x, y)
• Averaging M different noisy images:
ḡ(x, y) = (1/M) Σᵢ₌₁ᴹ gᵢ(x, y)
Shading Correction
An important application of image division
Points to note ...
Logical Operations
Note: Here black represents binary 0s and white represents binary 1s.
Geometric Spatial
Transformations
Translation
• Translate (a, b): (x, y) → (x + a, y + b)
(x, y) → (x′, y′) where (x′, y′) = (x + h, y + k)
Then, the relationship between (x, y) and (x', y') can be put into a
matrix form
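The matrix itself did not survive extraction; in homogeneous coordinates the standard form of this translation is:

```latex
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & h \\ 0 & 1 & k \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
```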
If a point (x, y) is rotated by an angle a about the coordinate origin to become a new point (x′, y′), the relationship can be described as follows:
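The equations themselves were lost in extraction; for a counterclockwise rotation by angle a about the origin the standard relations are:

```latex
\begin{aligned}
x' &= x\cos a - y\sin a \\
y' &= x\sin a + y\cos a
\end{aligned}
```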
Translation & Rotation
can be combined
Scaling can be applied to all axes, each with a
different scaling factor. For example, if the x-, y-
and z-axis are scaled with scaling factors p, q
and r, respectively, the transformation matrix is:
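The matrix did not survive extraction; in homogeneous coordinates the standard 3D scaling matrix with factors p, q, r is:

```latex
\begin{bmatrix}
p & 0 & 0 & 0 \\
0 & q & 0 & 0 \\
0 & 0 & r & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
```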
How far a direction is pushed is determined by a
shearing factor. On the xy-plane, one can push in the
x-direction, positive or negative, and keep the y-
direction unchanged. Or, one can push in the y-
direction and keep the x-direction fixed. The
following is a shear transformation in the x-direction
with shearing factor a:
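The transformation was lost in extraction; the standard x-direction shear with factor a is:

```latex
\begin{bmatrix} x' \\ y' \end{bmatrix}
=
\begin{bmatrix} 1 & a \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix},
\qquad x' = x + a\,y,\quad y' = y
```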
The shear transformation in the y-direction with shearing factor b is:
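The matrix was lost in extraction; the standard y-direction shear with factor b is:

```latex
\begin{bmatrix} x' \\ y' \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ b & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix},
\qquad x' = x,\quad y' = y + b\,x
```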
Transformations relocate pixels
on an image to new locations
Image Interpolation
Interpolation works by using known data to estimate values at
unknown points.
If you had an additional measurement at 11:30AM, you could see
that the bulk of the temperature rise occurred before noon
Resampled image
Zoomed image
Forward Mapping
Scan the original image, f, & at each location (x, y) compute the spatial
location (x’, y’) in the transformed image, g. Then,
g(x’, y’) = f(x, y)
Shrunk image
Problems with forward mapping: two or more pixels of the input image can be transformed to the same location in the output image, and some output locations may be assigned no pixel at all (holes).
Zoomed image
Original image
Image Expanded to Larger Dimension
Inverse Mapping
Steps to calculate gray levels in the output image, g(x’, y’)
1. For each pixel location (x’, y’) in the output image, g, find the
pixel location (x, y) in the input image by applying reverse
transformation.
2. (x, y) corresponds to a definite location in the original image
f.
3. If both x & y are integers, then the intensity value at
f(x, y) is directly substituted at g(x’, y’).
Else, the intensity value at non-integer coordinate (x, y) of
the input image f is calculated by interpolating the intensities
at the neighbouring pixels & then transferring it to g(x’, y’).
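The steps above can be sketched in Python for the common case of zooming by a scale factor. The tiny 2 × 2 test image and factor 2 are assumed for illustration, and bilinear interpolation (covered below) supplies the values at non-integer coordinates:

```python
def bilinear(img, x, y):
    """Bilinearly interpolate img at real-valued (x, y), clamping at edges."""
    x0, y0 = int(x), int(y)                 # floor for non-negative coords
    x1 = min(x0 + 1, len(img) - 1)
    y1 = min(y0 + 1, len(img[0]) - 1)
    dx, dy = x - x0, y - y0
    top = img[x0][y0] * (1 - dy) + img[x0][y1] * dy
    bot = img[x1][y0] * (1 - dy) + img[x1][y1] * dy
    return top * (1 - dx) + bot * dx

def zoom(img, factor):
    """Scale img by `factor` using inverse mapping."""
    rows, cols = len(img), len(img[0])
    out = []
    for xp in range(int(rows * factor)):
        row = []
        for yp in range(int(cols * factor)):
            # Step 1: inverse-map the output location to the input image.
            x, y = xp / factor, yp / factor
            # Steps 2-3: read (or interpolate) the input intensity there.
            row.append(bilinear(img, x, y))
        out.append(row)
    return out

img = [[0, 10], [20, 30]]
z = zoom(img, 2)
print(z[1][1])  # 15.0: interpolated at input coordinate (0.5, 0.5)
```

Scanning the output image guarantees every output pixel gets exactly one value, avoiding the holes and collisions of forward mapping.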
Types of Interpolation Algorithms
Nearest Neighbour Interpolation
A 450% increase in size of this 106 × 40 crop from an image
Bilinear Interpolation
The averaging has an anti-aliasing effect and therefore produces
relatively smooth edges with hardly any jaggies.
Bicubic Interpolation
Considers the closest 4×4 neighborhood of known pixels, for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation.
It is perhaps the ideal combination of processing time and
output quality.
For this reason it is a standard in many image editing programs
(including Adobe Photoshop), printer drivers and in-camera
interpolation.
Bicubic produces noticeably sharper images than the previous
two methods. Notice the smoother eyelashes.
Bilinear Image
Image Registration
Image registration is used to align two or more images of the same scene.
Here both the input & output images are known.
Example
CT highlights specific
features in the image of a
human organ
Satellite image registration
The shift of the cloud cluster in the second image with respect to the first provides the wind velocity.
Registration is also
A process of mapping between a
temporal sequence of image frames
Establishing a
correspondence
Matching of identical
shapes in the related image
pair
Geometric transformation
of one image into another
The mapping between the related image
pairs is achieved using
Geometric
Transformation
Image registration is the task of
applying some transformations to
two images so that they match as
best as possible.
In many image processing applications, it is necessary to perform a pixel-by-pixel comparison of two images of the same object field obtained from different sensors, or from the same sensor but at different times.
Oct 5, 2008 123
For this comparison, it is necessary to spatially register the images and thereby correct for relative translational shifts, rotational differences, scale differences & even perspective view differences.