
ROBOT VISION

Lecture Notes


Course No. 15678


Credit Hours: 3

Instructor:
Professor
Choi, Tae-Sun
Course outline:

The principles of machine/robot vision are introduced. The course covers image formation,
pattern classification, and motion and optical effects for object recognition. In addition, the
design of robot vision systems with optical devices is studied.

Prerequisite: Digital Signal Processing, Image Processing.

Textbook and References:

1. Robot Vision, B. K. P. Horn, MIT Press.

2. Computer Vision, Dana Ballard and Christopher Brown, Prentice Hall.
Course Schedule
1. Image Formation & Image Sensing
2. Binary Images: Geometrical Properties
3. Binary Images: Topological Properties
4. Regions & Image Segmentation
5. Image Processing: Continuous Images
6. Image Processing: Discrete Images
7. Edge & Edge Finding
8. Lightness & Color
9. Reflectance Map: Photometric Stereo
10. Reflectance: Shape from Shading
11. Motion Field & Optical Flow
12. Photogrammetry & Stereo
13. Pattern Classification
14. Polyhedral Objects
15. Extended Gaussian Images
16. Passive Navigation & Structure from Motion
1. Introduction

I. Human vision system

Figure 1—1

• Rod: slender receptor; more sensitive to light; scotopic vision

• Cone: shorter & thicker in structure; photopic vision (R, G, B); about 6.5 million

• Fovea: the density of cones is greatest; the region of sharpest photopic vision

Camera visual system

Figure 1—2

i. Photometric information ii. Geometric information

brightness : (ex: bright green, dark green) shape

color (ex, R. G. B)

Figure 1—3

NTSC : US standard for TV picture and audio coding and transmission

(timing calculation)

For one frame: 512 × 512 = 256 KByte

For RGB: × 3 channels

For real time: × 30 frames/sec

256 KByte × 3 × 30 = 22.5 MByte/sec ≈ 180 Mbit/sec

180 Mbps ⇒ compression is needed


T1 line: 1.544 Mbps

Compression ratio: 180 / 1.544 ≈ 120

Compression methods: wavelet compression; MPEG I, II, IV; JPEG; fractal; H.261
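The timing calculation above, in code (the notes use 1 MB = 1024² bytes, which turns the 23.6 MB/s raw figure into 22.5 MByte/sec and hence 180 Mbps):

```python
# Timing calculation for real-time 512x512 RGB video, as above.
frame_bytes = 512 * 512            # one 8-bit frame = 256 KByte
channels = 3                       # R, G, B
fps = 30                           # real-time frame rate

bytes_per_sec = frame_bytes * channels * fps   # 23 592 960 B/s (= 22.5 MByte/s binary)
bits_per_sec = bytes_per_sec * 8               # about 189 Mbps (~180 Mbps in the notes)

t1_mbps = 1.544                                # T1 line capacity
ratio = bits_per_sec / 1e6 / t1_mbps           # required compression ratio, about 120
print(bits_per_sec, round(ratio))
```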

II. Robot vision

RV

Computer graphics

Image processing

Figure 1—4

Figure 1—5

“Scene description”

Under-constrained problems: inverse problems (for example, 2 equations with 3 unknown variables)

1. Image analysis: a detailed but undigested description

2. Scene analysis: more parsimonious, structured descriptions suitable for decision making

The apparent brightness of a surface depends on three factors

1. Microstructure

2. Distribution of incident light

3. Orientation of the surface w.r.t. the light source & observer

Figure 1—6

“Lambertian Surface”

One that appears equally bright from all viewing directions and reflects all incident light, absorbing none

2. Image Formation & Image sensing

What determines where (geometry: shape) the image of some point will appear?

What determines how bright (photometry: brightness, color) the image of some surface will be?

I. Two Aspects of Image Formation

i. Perspective projection

Figure 2—1

ii. Orthographic Projection

f′/z0 = √(δx′² + δy′²) / √(δx² + δy²) = m (magnification)

In case the depth range of a scene is small relative to the distance z0:

Figure 2—2

x′ = mx, y′ = my;  if m = 1, then x′ = x, y′ = y

⇒ This orthographic projection can be modeled by rays parallel to the optical axis (rather than ones passing through the origin)

(a) (b)

Figure 2—3

E = δP/δI [watt/m²]

where δP is the power of the radiant energy falling on the infinitesimal image patch of area δI

II. Brightness

1. Image brightness (irradiance):  E = δP/δI [watt/m²]

2. Scene brightness (radiance):

L = δ²P/(δI δw) [watt/(m²·sr)],  where δ²P is the power emitted by the infinitesimal surface patch of area δI into an infinitesimal solid angle δw

Solid angle ≡ A/D²,  where A is area and D is distance

Ex 1 Hemisphere

Solid angle = (4πR²/2) · (1/R²) = 2π;  the whole sphere subtends 4π

Figure 2—4

Ex 2 Small patch

Figure 2—5

Lens

Figure 2—6

Image irradiance:  E = δP/δI,  solid angle = δI cos α / (f / cos α)²

Scene radiance:  L = δP/(δO δw′),  solid angle = δO cos θ / (z / cos α)²

Transfer function:  f(i) = δL/δE

f(·) : BRDF(Bidirectional Reflectance Distribution Function) in Ch 10

Ex3) Sphere/Radius 2m

Figure 2—7

III. Lenses

Figure 2—8

i. Lens formula

1/f = 1/z′ + 1/z    (2–1)

(s − z′)/(2R) = z′/D    (2–2)

Solve (2–1) & (2–2) to calculate R, the blur circle radius:

R = (Ds/2) · (1/f − 1/s − 1/z)

R = 0 at focused point P′

ii. Depth of field (DOF)

The range of distances over which objects are focused “sufficiently well,” in the sense that the diameter of the blur circle is less than the resolution of the imaging device.

The larger the lens aperture, the less the DOF.
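A small sketch of the blur-circle and depth-of-field relations above (the focal length, aperture, and distances below are made-up illustrative values):

```python
def blur_circle_radius(f, s, z, D):
    """R = (D*s/2) * (1/f - 1/s - 1/z): blur circle radius from (2-1) & (2-2).
    f: focal length, s: image-plane distance, z: object distance,
    D: aperture diameter (all in the same length unit)."""
    return (D * s / 2.0) * (1.0 / f - 1.0 / s - 1.0 / z)

f = 0.05                           # 50 mm lens (illustrative)
z0 = 2.0                           # object distance that is in focus
s = 1.0 / (1.0 / f - 1.0 / z0)     # image distance focusing z0, from (2-1)

print(abs(blur_circle_radius(f, s, z0, D=0.0125)) < 1e-9)   # True: in focus, R = 0
# A nearer object is defocused, and doubling the aperture doubles the blur
# circle radius: larger aperture => less depth of field.
print(abs(blur_circle_radius(f, s, 1.5, D=0.025))
      > abs(blur_circle_radius(f, s, 1.5, D=0.0125)))        # True
```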

Figure 2—9

iii. Vignetting

In a simple lens, all the rays that enter the front surface of the lens end up being focused in the image

Figure 2—10

In a compound lens, some of the rays that pass through the first lens may be occluded by portions of the second lens, and so on. Vignetting is a reduction in light-gathering power with increasing inclination of light rays with respect to the optical axis.

Figure 2—11

iv. Aberration

Points on the optical axis may be quite well focused, while those in a corner of the image are smeared out.

IV. Image sensing

i. Color sensing

a) Human eye

Figure 2—12

b) CCD sense

R, G, B filters:

R(λ) = 1 (at red wavelengths), 0 (otherwise)

G(λ) = 1 (at green wavelengths), 0 (otherwise)

B(λ) = 1 (at blue wavelengths), 0 (otherwise)

→ A UV filter filters out UV rays

ii. Randomness and Noise

a) Random variable [R.V]

Consider an experiment H with sample description space Ω. The elements or points of Ω, δ, are the random outcomes of H.

If to every δ we assign a real number X(δ), we establish a correspondence rule between δ and the real line R. Such a rule, subject to certain constraints, is called a random variable.

Figure 2—13

R.V.

• Probability density function (p.d.f.) p(x)

p(x) ≥ 0 for all x, and ∫ p(x) dx = 1 (over −∞ to ∞)

mean:  μ = ∫ x p(x) dx  (first moment)

variance:  σ² = ∫ (x − μ)² p(x) dx  (second central moment; equal to ∫ x² p(x) dx when μ = 0)

• Cumulative probability distribution (CDF) P(x)

P(x) = ∫_{−∞}^{x} p(t) dt

X = X1 + X2  (X1, X2 two R.V.s)

Given p1(x1) & p2(x2), what is p(x)?

• Solution

Figure 2—14

• Given x2, x1 = x − x2:  p1(x1)·δx1 = p1(x − x2)·δx

• Now, x2 can take on a range of values, with probability p2(x2)·δx2

• To find the prob. that x lies between x and x + δx:

p(x)·δx = ∫ p1(x − x2)·δx·p2(x2) dx2

∴ p(x) = ∫ p1(x − x2) p2(x2) dx2 = ∫ p1(x − t) p2(t) dt = p1 ∗ p2

For multiple R.V.s:  x̄ = (1/N) Σ_{i=1}^{N} x_i

Mean value: μ

Variance: Nσ²/N² = σ²/N

Standard deviation: σ/√N

b) Gaussian (Normal)

• p.d.f.:  p(x) = (1/(√(2π) σ)) e^{−(1/2)((x−μ)/σ)²}

c) Poisson (m > 0)

• p.d.f.:  p(n) = e^{−m} · mⁿ/n!,  the probability of n arrivals in a time interval T, for some m

mean = m,  variance = m

Ex Let X and Y be independent r.v.s with p_x(x) = e^{−x} u(x) and p_y(y) = (1/2)[u(y + 1) − u(y − 1)], and let Z ≡ X + Y. What is the p.d.f. of Z?

(a) (b)

Figure 2—15


p_z(z) = p_x ∗ p_y = ∫ p_x(z − y) p_y(y) dy

(a) z < −1:  p_z(z) = 0

(b) −1 ≤ z < 1:  p_z(z) = (1/2) ∫_{−1}^{z} e^{−(z−y)} dy = (1/2)(1 − e^{−(z+1)})

(c) z ≥ 1:  p_z(z) = (1/2) ∫_{−1}^{1} e^{−(z−y)} dy = (1/2)(e^{−(z−1)} − e^{−(z+1)})

Figure 2—16
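The piecewise answer can be checked numerically; this sketch integrates p_x(z − y)·p_y(y) by a Riemann sum and compares it against the closed forms obtained by evaluating the three integrals of the example:

```python
import numpy as np

def pz_numeric(z, dy=1e-4):
    """Riemann-sum evaluation of p_z(z) = integral of p_x(z - y) p_y(y) dy."""
    y = np.arange(-1.0, 1.0, dy)                       # support of p_y
    px = np.where(z - y >= 0, np.exp(-(z - y)), 0.0)   # p_x(z - y) = e^{-(z-y)} u(z-y)
    return (px * 0.5).sum() * dy                       # p_y = 1/2 on [-1, 1]

def pz_analytic(z):
    """Closed forms from evaluating the three cases of the example."""
    if z < -1:
        return 0.0
    if z < 1:
        return 0.5 * (1.0 - np.exp(-(z + 1)))
    return 0.5 * (np.exp(-(z - 1)) - np.exp(-(z + 1)))

for z in (-2.0, 0.0, 0.3, 2.5):
    print(abs(pz_numeric(z) - pz_analytic(z)) < 1e-3)   # True for each z
```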

3. Binary images : Geometric properties

I. Binary Images

Figure 3—1

1. Area of the object:

A = ∫∫_I b(x, y) dx dy  (zeroth moment)

2. Position of the object (center of area):

x̄ = (1/A) ∫∫_I x b(x, y) dx dy  (first moment)

ȳ = (1/A) ∫∫_I y b(x, y) dx dy

3. Orientation of the object: the axis (ρ, θ), i.e. the line x sin θ − y cos θ + ρ = 0, minimizing

E = ∫∫_I r² b(x, y) dx dy  (second moment)

II. Simple Geometrical Properties

Figure 3—2

r 2 = ( x − x0 )2 + ( y − y0 )2
= ( x − (− ρ sin θ + s cos θ )) 2 + ( y − ( ρ cos θ + s sin θ ))2
= ( x 2 + y 2 ) + ρ 2 + 2 ρ ( x sin θ − y cos θ ) − 2 s( x cos θ + y sin θ ) + s 2

To find the shortest distance from (x, y) to (x0, y0) differentiate with respect to S and set

equal to zero

d(r²)/ds = 0  ⇒  s = x cos θ + y sin θ

x − x0 = x(sin²θ + cos²θ) + ρ sin θ − (x cos θ + y sin θ) cos θ

= x sin²θ + ρ sin θ − y sin θ cos θ

= sin θ (x sin θ − y cos θ + ρ)

y − y 0 = − cosθ ( x sin θ − y cosθ + ρ )

∴ r 2 = ( x − x0 ) 2 + ( y − y 0 ) 2

= ( x sin θ − y cosθ + ρ ) 2

If r = 0, the point (x, y) lies on the line:  x sin θ − y cos θ + ρ = 0.

Finally,  E = ∫∫_I r² b(x, y) dx dy = ∫∫_I (x sin θ − y cos θ + ρ)² b(x, y) dx dy

Differentiating w.r.t. ρ and setting the result to zero leads to

dE/dρ = ∫∫_I 2(x sin θ − y cos θ + ρ) b(x, y) dx dy = 0

⇒ A(x̄ sin θ − ȳ cos θ + ρ) = 0, i.e. the axis passes through the center of area.

Let x′ = x − x̄ and y′ = y − ȳ. Since x̄ sin θ − ȳ cos θ + ρ = 0,

x sin θ − y cos θ + ρ = x′ sin θ − y′ cos θ

and so

E = ∫∫_I (x′ sin θ − y′ cos θ)² b(x, y) dx dy

= sin²θ ∫∫_I x′² b(x, y) dx dy − 2 sin θ cos θ ∫∫_I x′y′ b(x, y) dx dy + cos²θ ∫∫_I y′² b(x, y) dx dy

= a sin²θ − b sin θ cos θ + c cos²θ

where,

a = ∫∫_I (x′)² b(x, y) dx′ dy′

b = 2 ∫∫_I (x′y′) b(x, y) dx′ dy′

c = ∫∫_I (y′)² b(x, y) dx′ dy′

Now

E = (1/2)(a + c) − (1/2)(a − c) cos 2θ − (1/2) b sin 2θ

Differentiating w.r.t. θ and setting the result to zero, we have

tan 2θ = b/(a − c),  unless b = 0 and a = c

∴ θ = (1/2) tan⁻¹(b/(a − c)),  ρ = −x̄ sin θ + ȳ cos θ

III. Projections

Figure 3—3

h(x) = ∫ b(x, y) dy,  v(y) = ∫ b(x, y) dx

Area:  A = ∫∫ b(x, y) dx dy = ∫ h(x) dx = ∫ v(y) dy

Position:  x̄ = (1/A) ∫∫ x b(x, y) dx dy = (1/A) ∫ x h(x) dx,   ȳ = (1/A) ∫ y v(y) dy

Orientation (ρ, θ):  θ = (1/2) tan⁻¹(b/(a − c)),  ρ = −x̄ sin θ + ȳ cos θ, using

∫∫_I x² b(x, y) dx dy = ∫ x² h(x) dx

∫∫_I y² b(x, y) dx dy = ∫ y² v(y) dy

∫∫_I xy b(x, y) dx dy = ∫ t² d(t) dt − (1/2) ∫ x² h(x) dx − (1/2) ∫ y² v(y) dy

• Diagonal projection (θ = 45°)

Figure 3—4

Suppose that θ is 45°:

d(t) = ∫ b((1/√2)(t − s), (1/√2)(t + s)) ds

Now consider that

∫∫_I (1/2)(x + y)² b(x, y) dx dy = ∫ t² d(t) dt

= ∫∫_I ((1/2)x² + xy + (1/2)y²) b(x, y) dx dy

so,  ∫∫_I xy b(x, y) dx dy = ∫ t² d(t) dt − (1/2) ∫ x² h(x) dx − (1/2) ∫ y² v(y) dy

IV. Discrete Binary Images

(see book Figure 3-8)

Area:  A = Σ_{i=1}^{N} Σ_{j=1}^{M} b_ij

Position (ī, j̄):  ī = (1/A) Σ_i Σ_j i b_ij,   j̄ = (1/A) Σ_i Σ_j j b_ij

Orientation (ρ, θ): requires  Σ_i Σ_j i² b_ij,  Σ_i Σ_j j² b_ij,  Σ_i Σ_j ij b_ij

V. Run-Length coding

Figure 3—5

Where r_ik is the k-th run of the i-th line, and the first run in each row is a run of zeros:

A = Σ_{i=1}^{n} Σ_{k=1}^{m_i/2} r_{i,2k} = Σ_{i=1}^{n} (r_{i2} + r_{i4} + r_{i6} + … + r_{i,m_i})

Position (center of area) (ī, j̄):

h_i = Σ_{k=1}^{m_i/2} r_{i,2k}

ī = (1/A) Σ_{i=1}^{n} i·h_i,   j̄ = (1/A) Σ_j j·v_j

1. Find the first horizontal differences of the image data

Figure 3—6

2. The first difference of the vertical projection can be computed from the projection of the first horizontal differences

⇒ count the number of circles subtracted by the number of triangles.

3. The vertical projection v_j can be found by summing the result from left to right

Orientation (ρ, θ): requires

Σ_i i² h_i,   Σ_j j² v_j

Σ_i Σ_j ij b_ij = Σ_t t² d_t − (1/2) Σ_i i² h_i − (1/2) Σ_j j² v_j

where t = (1/√2)(i + j)

d_t can be obtained in a way similar to that used to obtain the vertical projection

4. Binary Images: Topological Properties

 Jordan curve theorem

A simple closed curve separates the image into two simply connected regions

Figure 4—1

4 Connectedness – only edge-adjacent cells are considered neighbors.

Figure 4—2

Ex 1

Figure 4—3

4 objects

2 backgrounds

No closed curve ⇒ contradiction (by the Jordan curve theorem)

8 connectedness – Corner-adjacent cells are considered neighbors, too.

Figure 4—4

6 connectedness

Figure 4—5

Figure 4—6

Figure 4—7

i. A sequential Labeling Algorithm

 Recursive Labeling

1. Choose a point where b_ij = 1, and assign a label to this point and to its neighbors

Figure 4—8

2. Next, label all the neighbors of these neighbors ⇒ one component will have been labeled completely

3. Find new places to start a labeling operation whenever an unlabeled point with b_ij = 1 is found

4. Try every cell in this scan

 Sequential Labeling

Figure 4—9

1. If A is zero, there is nothing to do.

Figure 4—10

2. If A is one: if D has been labeled, simply copy that label and move on; if not, then if one of B or C is labeled, copy that label; else, choose a new label for A.

(go to step 2)

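A sketch of the sequential labeling scan: copy a neighbor's label when one exists, otherwise open a new label, and resolve the recorded equivalences in a second pass (the equivalence table here is a union-find structure, an implementation choice not spelled out in the notes):

```python
def label_components(b):
    """Two-pass sequential labeling of a binary image (4-connectivity)."""
    rows, cols = len(b), len(b[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = [0]                     # equivalence table; parent[k] == k is a root

    def find(k):                     # follow the chain to the class root
        while parent[k] != k:
            k = parent[k]
        return k

    next_label = 1
    for i in range(rows):
        for j in range(cols):
            if not b[i][j]:
                continue
            up = labels[i - 1][j] if i > 0 else 0      # neighbor above
            left = labels[i][j - 1] if j > 0 else 0    # neighbor to the left
            if up and left:
                labels[i][j] = find(up)
                parent[find(left)] = find(up)          # record equivalence
            elif up or left:
                labels[i][j] = find(up or left)
            else:                                      # no labeled neighbor:
                parent.append(next_label)              # open a new label
                labels[i][j] = next_label
                next_label += 1

    # second pass: replace every label by its equivalence-class representative
    roots = {}
    for i in range(rows):
        for j in range(cols):
            if labels[i][j]:
                r = find(labels[i][j])
                roots.setdefault(r, len(roots) + 1)
                labels[i][j] = roots[r]
    return labels, len(roots)

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 1, 1]]
lab, n = label_components(img)
print(n)   # 3 components under 4-connectivity
```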
II. Local counting and iterative modification

i. Local counting

Figure 4—11

Horizontal: N

Vertical: N,  total length = 2N

Figure 4—12

Overestimation ratio: 2N : √2·N = √2 : 1

Considering all slopes, the average overestimation ratio is 4/π

• Euler number

1. No. of bodies − No. of holes

2. No. of upstream convexities − No. of upstream concavities

convexities (+1), concavities (−1)

Figure 4—13

Figure 4—14

For the examples of Figure 4—14:

1. body − hole:  1 − 2 = −1,  2 − 0 = 2,  1 − 0 = 1,  1 − 1 = 0,  1 − 0 = 1,  1 − 0 = 1

2. convexity − concavity:  1 − 2 = −1,  2 − 0 = 2,  1 − 0 = 1,  1 − 1 = 0,  1 − 0 = 1,  1 − 0 = 1

ii. The additive set property

Figure 4—15

⇒ This permits us to split an image into smaller pieces and obtain an overall answer by combining the results of operations performed on the pieces:

E(A ∪ B) = E(A) + E(B) − E(A ∩ B)

Figure 4—16

Piece values (1, 2, 2, 2, 1) and intersection values (1 − 0 = 1, 2 − 0 = 2, 3 − 0 = 3, 2 − 0 = 2):

E(a ∪ b ∪ c ∪ d ∪ e) = E(a) + E(b) + E(c) + E(d) + E(e) − [E(a ∩ b) + E(b ∩ c) + E(c ∩ d) + E(d ∩ e)]

= (1 + 2 + 2 + 2 + 1) − (1 + 2 + 3 + 2) = 0

which is right: body = 1, hole = 1, so E = 1 − 1 = 0 (convexities = 2, concavities = 2)

Ex 1) Hand

Figure 4—17

Body: 1, Hole: 0  ⇒  1 − 0 = 1

Convexities: 5, Concavities: 4  ⇒  5 − 4 = 1

Euler number = 1
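The Euler number can be computed by local counting; the sketch below uses 2×2 bit-quad pattern counts (Gray's method), which is one concrete local-counting scheme, not necessarily the exact operator set intended by the notes:

```python
import numpy as np

def euler_number(b, connectivity=4):
    """Euler number (#bodies - #holes) from 2x2 bit-quad pattern counts:
    Q1/Q3 are the quads with one/three foreground cells, Qd the diagonal
    (checkerboard) quads."""
    p = np.pad(np.asarray(b, dtype=int), 1)            # zero border around image
    q = p[:-1, :-1] + p[:-1, 1:] + p[1:, :-1] + p[1:, 1:]   # sum in each 2x2 quad
    diag = ((p[:-1, :-1] == p[1:, 1:]) & (p[:-1, 1:] == p[1:, :-1])
            & (p[:-1, :-1] != p[:-1, 1:]))             # 1001 / 0110 patterns
    Q1, Q3, Qd = (q == 1).sum(), (q == 3).sum(), diag.sum()
    sign = 1 if connectivity == 4 else -1
    return (Q1 - Q3 + sign * 2 * Qd) // 4

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(euler_number(ring))          # 0: one body minus one hole

diag_pair = [[1, 0],
             [0, 1]]
print(euler_number(diag_pair, 4))  # 2: two bodies under 4-connectedness
print(euler_number(diag_pair, 8))  # 1: one body under 8-connectedness
```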

iii. Iterative modification

Instead of adding up the outputs of Local operators in the previous section, the new

binary image determined by the iterative method can be used as input to another cycle

of computation this process, called iterative modification is useful because it allows us

37
to incrementally changes an image that is difficult to process into one that might

succumb to the methods already discussed

Euler Differential E*

2. E* = - ( 1 – 1 = 0 ) + ( 1- 0 ) = 1

Part pattern Current pattern

38
4

*
E new = −E *

5. Regions & image segmentation

I. Thresholding Methods

• Image segmentation: the partition of an image into a set of non-overlapping regions whose union is the entire image

Figure 5—1

Detection of discontinuities : points, lines, and edges

Figure 5—2

Figure 5—3

R = w1·z1 + w2·z2 + ⋯ + wq·zq = Σ_{i=1}^{q} w_i z_i

A mask used for detecting isolated points:

Figure 5—4

i. Point detection

The detection of isolated points in an image is straightforward

If |R| > T, where T is a threshold, it is an isolated point
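A sketch of point detection with the |R| > T rule; the 8-neighbor Laplacian-style mask below is an assumed stand-in for the mask of Figure 5—4:

```python
import numpy as np

def detect_points(f, T, w=None):
    """Flag pixels where |R| > T, with R the mask response
    R = sum of mask weights times the local 3x3 neighborhood.
    The 8-neighbor Laplacian-style mask is an assumption."""
    if w is None:
        w = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])
    rows, cols = f.shape
    hits = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            R = (w * f[i - 1:i + 2, j - 1:j + 2]).sum()
            if abs(R) > T:
                hits.append((i, j))
    return hits

f = np.zeros((7, 7))
f[3, 3] = 10.0                      # one isolated bright point
print(detect_points(f, T=50))       # [(3, 3)]
```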

ii. Line detection

The next level of complexity involves the detection of lines in an image. Consider

the four masks, as shown below

Figure 5—5

• If, at a certain point in the image, |R_i| > |R_j| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i. For example, if |R1| > |R_j| for j = 2, 3, 4, the point is more likely associated with a horizontal line.

iii. Edge detection

The idea underlying most edge detection techniques is the computation of a local derivative operator

Figure 5—6

• Gradient operator

∇f = [∂f/∂x, ∂f/∂y]ᵀ

|∇f| = √((∂f/∂x)² + (∂f/∂y)²)

Figure 5—7

• Laplacian operator

∇²f = ∂²f/∂x² + ∂²f/∂y²

∇²f = 4z5 − (z2 + z4 + z6 + z8)  (in discrete form)

Figure 5—8

Cross section of ∇²f:  ∇²f = 4z5 − (z2 + z4 + z6 + z8)

II. Thresholding

• Thresholding is one of the most important approaches to image segmentation

Figure 5—9

g(x, y) = 1 if f(x, y) < T;  0 otherwise
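The thresholding rule above as code (dark-object convention g = 1 where f < T, exactly as written; the test values are made up):

```python
import numpy as np

def threshold(f, T):
    """Segment by thresholding: g(x, y) = 1 where f(x, y) < T, else 0."""
    return (f < T).astype(np.uint8)

f = np.array([[200, 40, 210],
              [ 35, 30, 220],
              [190, 45, 205]])
g = threshold(f, T=100)
print(g.sum())   # 4 pixels below the threshold
```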

III. Region-oriented segmentation

i. Basic Formulation

Figure 5—10

Let R represent the entire image region and R_i the partitioned regions, for i = 1, 2, …, n:

(a) ∪_{i=1}^{n} R_i = R  (entire image region)

(b) R_i is a connected region, i = 1, 2, …, n

(c) R_i ∩ R_j = ∅ for all i & j, i ≠ j

(e.g. R1 ∩ R5 = ∅ and R1 ∩ R2 = ∅, but R1 ∩ R1 ≠ ∅)

(d) P(R_i) = TRUE for i = 1, 2, …, n

Figure 5—11

ii. Region Growing by Pixel Aggregation

“Region growing” is a procedure that groups pixels or sub-regions into larger regions. The simplest of these approaches is pixel aggregation, which starts with a set of “seed” points and from these grows regions by appending to each seed point those neighboring pixels that have similar properties.

Ex 1

Figure 5—12
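A minimal sketch of region growing by pixel aggregation; the similarity predicate used here (absolute difference from the seed value within a tolerance) is an assumption, since the notes leave the predicate open:

```python
def grow_region(img, seed, tol):
    """Grow a region from a seed pixel by appending 4-neighbors whose
    value is within `tol` of the seed value (an assumed predicate)."""
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    stack = [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < rows and 0 <= nj < cols
                    and (ni, nj) not in region
                    and abs(img[ni][nj] - seed_val) <= tol):
                region.add((ni, nj))
                stack.append((ni, nj))
    return region

img = [[1, 1, 5, 5],
       [1, 2, 5, 5],
       [0, 1, 1, 5]]
r = grow_region(img, seed=(0, 0), tol=1)
print(len(r))   # 7: grows over the 0/1/2 pixels, stops at the 5s
```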

iii. Region splitting and Merging

• Splitting

Let R represent the entire image region and select a predicate P as discussed in the previous section. For a square image, one approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region R_i, P(R_i) = TRUE; that is, if P(R_i) = FALSE, divide that region into quadrants.

Figure 5—13

• Merging

If only splitting were used, the final partition likely would contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting: two adjacent regions R_j & R_k are merged only if P(R_j ∪ R_k) = TRUE.

Figure 5—14

• The preceding discussion may be summarized by the following procedure: at any step, split into four disjoint quadrants any region R_i where P(R_i) = FALSE;

Figure 5—15

a) merge any adjacent regions Rj & Rk for which P(Rj∪Rk) =TRUE ;


b) stop when no further merging and splitting is possible

Ex 2

Figure 5—16

P(R) = FALSE      P(R21) = TRUE
P(R1) = TRUE      P(R34) = FALSE
P(R2) = FALSE     P(R43) = FALSE
P(R3) = FALSE     P(R23 ∪ R32 ∪ R41) = TRUE
P(R4) = FALSE     → Merging

Figure 5—17

6. Image Processing : Continuous images

I. Linear, Shift-Invariant Systems

Spatial domain → Point Spread Function (PSF)

Frequency domain → Modulation Transfer Function (MTF)

Figure 6—1

• The defocused image (g) is a processed version of the focused image (f): doubling the brightness of the ideal image doubles the defocused image, and if the imaging system moves, f & g move by the same amount.

Figure 6—2

Linearity:  α f1(x, y) + β f2(x, y) → α g1(x, y) + β g2(x, y)

Shift:  f(x − a, y − b) → g(x − a, y − b)

II. Convolution

f(x, y) → h(x, y) → g(x, y)

g(x, y) = ∫∫ f(x − ε, y − η) h(ε, η) dε dη  →  g = f ⊗ h

Linearity:  α f1(x, y) + β f2(x, y) → h → g(x, y) = ?

g(x, y) = ∫∫ [α f1(x − ε, y − η) + β f2(x − ε, y − η)] · h(ε, η) dε dη = α g1(x, y) + β g2(x, y)

Shift invariance:

f(x − a, y − b) → h(x, y) → ∫∫ f(x − a − ε, y − b − η) h(ε, η) dε dη = g(x − a, y − b)

Impulse response:

f(x, y) → h(x, y) → g(x, y) with g(x, y) = h(x, y):

g(x, y) = ∫∫ f(x − ε, y − η) h(ε, η) dε dη = h(x, y)

is only possible if f(x, y) = ∞ at the origin and 0 otherwise. This function is called the unit impulse δ(x, y), or Dirac delta function:

∫∫ δ(x, y) dx dy = 1

(All integrals run over −∞ to ∞.)

Sifting property:  ∫∫ δ(x, y) h(x, y) dx dy = h(0, 0)

and  ∫∫ δ(x − ε, y − η) · h(ε, η) dε dη = h(x, y)

⇒ h(x, y): the impulse response of the system

Figure 6—3

k(ε, η) δ(x − ε, y − η): scaled delta function

f(x, y) = ∫∫ f(ε, η) · δ(x − ε, y − η) dε dη

g(x, y) = ∫∫ f(x − ε, y − η) · h(ε, η) dε dη

Commutativity:

a ⊗ b = ∫∫ a(x − ε, y − η) · b(ε, η) dε dη    (let x − ε = α, y − η = β)

= ∫∫ a(α, β) · b(x − α, y − β) dα dβ = b ⊗ a

Similarly, (a ⊗ b) ⊗ c = a ⊗ (b ⊗ c)

Cascade

f → h1 → f ⊗ h1 → h2 → (f ⊗ h1) ⊗ h2 = f ⊗ (h1 ⊗ h2)

f → (h1 ⊗ h2) → g

III. MTF

An eigenfunction of a system is a function that is reproduced with at most a change in amplitude:

e^{jwt} → system → A(w) e^{jwt}

For f(x, y) = e^{2πi(ux+vy)}:

g(x, y) = ∫∫ e^{2πi{u(x−ε)+v(y−η)}} · h(ε, η) dε dη

= e^{2πi(ux+vy)} ∫∫ e^{−2πi(uε+vη)} · h(ε, η) dε dη

= e^{2πi(ux+vy)} · A(u, v)

IV. Fourier Transform

Represent the signal as an infinite weighted sum of an infinite number of sinusoids (u: angular frequency):

F(u) = ∫_{−∞}^{∞} f(x) e^{−iux} dx

Note:  e^{ik} = cos k + i sin k,  i = √(−1)

Arbitrary function => Single Analytic Expression

Spatial Domain(x)=> Frequency Domain(u) (Frequency Spectrum F(u))

• Inverse Fourier Transform (IFT)

f(x) = (1/2π) ∫_{−∞}^{∞} F(u) e^{iux} du

f(x, y) = (1/4π²) ∫∫_{−∞}^{∞} F(u, v) e^{+i(ux+vy)} du dv

V. The Fourier Transform of Convolution

Let g = f ⊗h

Now,

G(u) = ∫ g(x) e^{−i2πux} dx

= ∫∫ f(τ) h(x − τ) e^{−i2πux} dτ dx

= ∫ f(τ) e^{−i2πuτ} [∫ h(x − τ) e^{−i2πu(x−τ)} dx] dτ

= ∫ f(τ) e^{−i2πuτ} dτ · ∫ h(x′) e^{−i2πux′} dx′

= F(u) H(u)

Convolution in the spatial domain ↔ multiplication in the frequency domain

Spatial Domain (x) Frequency Domain(u)

g = f ⊗ h ↔ G = FH
g = fh ↔ G = F ⊗ H

So, we can find g(x) by Fourier transform

Figure 6—4
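The convolution theorem can be checked numerically with the DFT (circular convolution, which is what DFT multiplication corresponds to):

```python
import numpy as np

# Convolution theorem check: circular convolution in the spatial domain
# equals pointwise multiplication of the DFTs (g = f (*) h  <->  G = F.H).
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
h = rng.standard_normal(64)

# circular convolution computed directly from the definition
g_direct = np.array([sum(f[m] * h[(n - m) % 64] for m in range(64))
                     for n in range(64)])

# ... and via the frequency domain
g_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))

print(np.allclose(g_direct, g_fft))   # True
```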

Spatial Domain (x)                     Frequency Domain (u)

Linearity:        c1 f(x) + c2 g(x)        c1 F(u) + c2 G(u)
Scaling:          f(ax)                    (1/|a|) F(u/a)
Shifting:         f(x − x0)                e^{−i2πux0} F(u)
Symmetry:         F(x)                     f(−u)
Conjugation:      f*(x)                    F*(−u)
Convolution:      f(x) ⊗ g(x)              F(u) G(u)
Differentiation:  dⁿf(x)/dxⁿ               (i2πu)ⁿ F(u)

VI. Generalized Functions and Unit Impulses

• The integral of the one-dimensional unit impulse is the unit step function,

∫_{−∞}^{x} δ(t) dt = u(x),  where u(x) = 1 for x > 0;  = 1/2 for x = 0;  = 0 for x < 0.

What is the Fourier transform of the unit impulse?


We have ∫∫ δ(x, y) e^{−i(ux+vy)} dx dy = 1, as can be seen by substituting x = 0 and y = 0 into e^{−i(ux+vy)}, using the sifting property of the unit impulse. Alternatively, we can use

lim_{ε→0} ∫∫ δ_ε(x, y) e^{−i(ux+vy)} dx dy,  or  lim_{ε→0} (1/2ε) ∫_{−ε}^{ε} e^{−iux} dx · (1/2ε) ∫_{−ε}^{ε} e^{−ivy} dy,

that is,  lim_{ε→0} (sin uε / uε)(sin vε / vε) = 1.

VII. Convergence Factors and the Unit Impulse

We want a smoothed function of f(x)

g ( x) = f ( x) ∗ h ( x)

Let us use a Gaussian kernel

Figure 6—5

h(x) = (1/(√(2π) σ)) exp(−x²/(2σ²))

Then

H(u) = exp(−(1/2)(2πu)² σ²)

G(u) = F(u) H(u)

H(u) attenuates high frequencies in F(u) (Low-pass Filter)!

VIII. Partial Derivatives

What are the F.T.s of ∂f(x, y)/∂x and ∂f(x, y)/∂y?

∫∫ (∂f/∂x) e^{−2πi(ux+vy)} dx dy = ∫ [∫ (∂f/∂x) e^{−2πiux} dx] · e^{−2πivy} dy

Integrating by parts,

∫_{−∞}^{∞} (∂f/∂x) e^{−2πiux} dx = [f(x, y) e^{−2πiux}]_{−∞}^{∞} + (2πiu) ∫_{−∞}^{∞} f(x, y) e^{−2πiux} dx

We can't proceed unless lim_{x→±∞} f(x, y) = 0. In that case,

(2πiu) ∫∫ f(x, y) e^{−2πi(ux+vy)} dx dy = (2πiu) · F(u, v)

∴ F{∂f(x, y)/∂x} = 2πiu · F(u, v)

F{∂f(x, y)/∂y} = 2πiv · F(u, v)

IX. Rotational Symmetry and Isotropic operators

• Rotationally symmetric operators are particularly attractive because they treat image features in the same way, no matter what their orientation is.

• Circular symmetry

Figure 6—6

Let us introduce polar coordinates in both the spatial and frequency domains:

Figure 6—7

x = r cos φ,  y = r sin φ;  u = ρ cos α,  v = ρ sin α;  so ux + vy = rρ cos(φ − α)

F{f(x, y)} = ∫∫ f(x, y) e^{−2πi(ux+vy)} dx dy

= ∫_{0}^{∞} ∫_{0}^{2π} f_r(r) e^{−2πi·rρ cos(φ−α)} · r dφ dr

With J0 the zeroth-order Bessel function, this gives the Hankel transform pair:

F_ρ(ρ) = 2π ∫_{0}^{∞} f_r(r) J0(2πrρ) r dr

f_r(r) = 2π ∫_{0}^{∞} F_ρ(ρ) J0(2πrρ) ρ dρ

where  J0(x) = (1/2π) ∫_{0}^{2π} e^{−ix cos(θ−w)} dθ

X. Blurring

h(x, y) = (1/(2πσ²)) e^{−(1/2)(x²+y²)/σ²}

⇓ F.T.

H(u, v) = ∫∫ (1/(2πσ²)) e^{−(1/2)(x²+y²)/σ²} · e^{−2πi(ux+vy)} dx dy

= (1/(√(2π)σ)) ∫ e^{−(1/2)(x/σ)²} e^{−2πiux} dx · (1/(√(2π)σ)) ∫ e^{−(1/2)(y/σ)²} e^{−2πivy} dy

∫ e^{−(1/2)(x/σ)²} e^{−2πiux} dx = ∫ e^{−(1/2)(x/σ)²} cos(2πux) dx − i ∫ e^{−(1/2)(x/σ)²} sin(2πux) dx

The second integrand is odd, so its integral over the symmetric region is zero.

And,  ∫ e^{−(1/2)(x/σ)²} cos(2πux) dx = √(π/a) · e^{−π²u²/a},  with a = 1/(2σ²)

= √(2π) σ · e^{−(1/2)(2π)²u²σ²}

H(u, v) = (1/(√(2π)σ)) · √(2π)σ e^{−(1/2)(2π)²u²σ²} · (1/(√(2π)σ)) · √(2π)σ e^{−(1/2)(2π)²v²σ²}

= e^{−(1/2)(2π)²(u²+v²)σ²}

which is also rotationally symmetric:  F{Gaussian} = Gaussian

Figure 6—8: Low Pass Filtering
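A numerical check that the F.T. of a Gaussian is a Gaussian, H(u) = exp(−(1/2)(2πu)²σ²), using a sampled kernel and the FFT:

```python
import numpy as np

# h(x) = exp(-x^2 / 2 s^2) / (sqrt(2 pi) s)  ->  H(u) = exp(-(2 pi u)^2 s^2 / 2)
s = 0.5
dx = 0.01
x = np.arange(-10, 10, dx)
h = np.exp(-0.5 * (x / s) ** 2) / (np.sqrt(2 * np.pi) * s)

u = np.fft.fftfreq(len(x), d=dx)      # frequencies in cycles per unit length
H_num = np.abs(np.fft.fft(h)) * dx    # DFT approximation of the continuous F.T.
H_ana = np.exp(-0.5 * (2 * np.pi * u) ** 2 * s ** 2)

print(np.allclose(H_num, H_ana, atol=1e-6))   # True: Gaussian in, Gaussian out
```

Note the low-pass behavior: H falls off rapidly with |u|, and the larger σ is, the narrower H becomes.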

XI. Restoration and Enhancement

f(x, y) → h(x, y) → h′(x, y) → f(x, y)

The cascade of the two systems is the identity system.

H′(u, v) = min( 1/H(u, v), A ),

where A is the maximum gain. Or, more elegantly, we can use something like

H′(u, v) = H(u, v) / (H(u, v)² + B²),

where 1/(2B) is the maximum gain, if H(u, v) is real.

XII. Correlation

Figure 6—9

The cross-correlation of a(x, y) and b(x, y) is defined as

φ_ab(x, y) = a(x, y) ∗ b(x, y) = ∫∫ a(ε − x, η − y) · b(ε, η) dε dη

If a(x, y) = b(x, y), the result is called the autocorrelation.

φ_aa(x, y) is symmetric:  φ_aa(−x, −y) = φ_aa(x, y)

φ_aa(0, 0) ≥ φ_aa(x, y) for all (x, y)

φ_aa(0, 0) = ∫∫ a(ε, η) · a(ε, η) dε dη = ∫∫ a²(ε, η) dε dη

XIII. Optimal Filtering

Figure 6—10

E = ∫∫ {o(x, y) − d(x, y)}² dx dy

i(x, y) = b(x, y) + n(x, y)

o(x, y) = i(x, y) ⊗ h(x, y)

So,  E = ∫∫ [o²(x, y) − 2 o(x, y) · d(x, y) + d²(x, y)] dx dy

o²(x, y) = {i(x, y) ⊗ h(x, y)}²

= ∫∫ i(x − ε, y − η) h(ε, η) dε dη × ∫∫ i(x − α, y − β) h(α, β) dα dβ

Therefore,

∫∫ o² dx dy = ∫∫∫∫ h(ε, η) h(α, β) [∫∫ i(x − ε, y − η) · i(x − α, y − β) dx dy] dε dη dα dβ

With x − α = A ⇒ x = A + α and y − β = B ⇒ y = B + β, the inner integral becomes

∫∫ i(A − (ε − α), B − (η − β)) · i(A, B) dA dB = φ_ii(ε − α, η − β)  (autocorrelation)

7. Image Processing – Discrete Image

I. Sampling Theorem
To convert a continuous distribution of brightness to a digital image, it is necessary to extract the brightness at points arranged at a fixed period. This is called sampling.

Assume the image is a one-dimensional function, for simplicity.

Let the brightness at position x, i.e. the value of the pixel at x, be f(x).

Figure 7—1

Figure 7—2

Suppose a function that is composed of an infinite number of Dirac delta functions placed at an interval T. This is called the comb function, defined as

comb_T(x) = Σ_{n=−∞}^{∞} δ(x − nT)

A sampled digital image from f(x), denoted f_T(x), is expressed as f(x) multiplied by the comb function comb_T(x), i.e.

f_T(x) = f(x) · comb_T(x)

FT{f_T(x)}(v) = FT{f(x)}(v) ⊗ FT{comb_T(x)}(v)

and,  FT{comb_T(x)}(v) = (1/T) comb_{1/T}(v)

FT{f_T(x)}(v) = FT{f(x)}(v) ⊗ (1/T) comb_{1/T}(v)

What is convolution with a comb function?

We start from convolution with the delta function:

f(t) ∗ δ(t) = ∫_{−∞}^{∞} f(y) · δ(t − y) dy = f(t)

i.e., the convolution of a function and the delta function is equal to the original function itself.

Since a comb function is a sequence of delta functions arranged at a constant period, the convolution of a function and a comb function is an arrangement of duplicates of the original function at a constant period.

The F.T. of f_T(x), which is the sampled version of f(x) at period T, is an infinite sequence of duplicates of FT{f(x)}, the F.T. of the original function f(x), at period 1/T.

Figure 7—3

If the period of the comb function in the frequency domain is sufficiently large, adjacent FT{f(x)}'s do not overlap. In this case, the F.T. of the original function, FT{f(x)}, can be separated and extracted, i.e. no information about the brightness distribution of the original image is lost by sampling. However, if the interval of the comb function in the frequency domain is small, adjacent FT{f(x)}(v)'s overlap. In this case, the original FT{f(x)} can't be separated, and a faulty function will be extracted. This effect is called aliasing.

⇒ FT { fT ( x )} (v ) = FT { f ( x )} (v ) ⊗ FT {combT ( x )} (v )

Figure 7—4

Since the support of FT{f(x)} is in the range −V_c ~ V_c, the period 1/T has to be at least 2V_c to avoid overlapping of FT{f(x)}'s. Since T is the sampling period, 1/T denotes the number of samples per unit length, i.e. the sampling rate. Consequently, the original brightness distribution can be reconstructed from a sampled digital image if the sampling rate is more than twice the maximum frequency contained in the original distribution. This theorem is called the sampling theorem.
• Discrete Fourier Transform (DFT)

F(u, v) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) · e^{−2πi(um/M + vn/N)}    (u = 0…M−1, v = 0…N−1)

Inverse D.F.T.:  f(m, n) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u, v) · e^{+2πi(um/M + vn/N)}    (m = 0…M−1, n = 0…N−1)

• f(m, n) ≜ input signal amplitude (real or complex) at sample (m, n)

T ≜ sampling interval

f_s ≜ sampling rate (= 1/T), samples/second

M, N = numbers of spatial samples = numbers of frequency samples (integers)

• Magnitude:  |F(u, v)| = [R²(u, v) + I²(u, v)]^{1/2}

Phase angle:  φ(u, v) = tan⁻¹(I(u, v) / R(u, v))

Power spectrum:  P(u, v) = |F(u, v)|² = R²(u, v) + I²(u, v)

• F(0, 0) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n)  ⇒  average of f(m, n), or D.C. component.

If f(x, y) is real, its F.T. is conjugate symmetric:

F(u, v) = F*(−u, −v)  ⇒  the spectrum of the D.F.T. is symmetric:  |F(u, v)| = |F(−u, −v)|

• Relationship between samples in the spatial and frequency domains:

Δu = 1/(M·Δx),  Δv = 1/(N·Δy)

• Translation

f(x, y) · e^{+i·2π(u0·x/M + v0·y/N)} ⇔ F(u − u0, v − v0)

f(x − x0, y − y0) ⇔ F(u, v) · e^{−i·2π(u·x0/M + v·y0/N)}

Case: when u0 = M/2, v0 = N/2:

e^{i·2π(x/2 + y/2)} = e^{iπ(x+y)} = cos{π(x + y)} = (−1)^{x+y}

⇒ f(x, y)(−1)^{x+y} ⇔ F(u − M/2, v − N/2)

And similarly,

f(x − M/2, y − N/2) ⇒ F(u, v)(−1)^{u+v}

• Rotation: with x = r·cos θ, y = r·sin θ, u = w·cos φ, v = w·sin φ, f(x, y) and F(u, v) become f(r, θ) and F(w, φ):

f(r, θ + θ0) ⇔ F(w, φ + θ0)

⇒ Rotating f(x, y) by an angle θ0 rotates F(u, v) by the same angle, and vice versa.

• Periodicity:  F(u, v) = F(u + M, v) = F(u, v + N) = F(u + M, v + N)

The inverse transform is also periodic:

f(x, y) = f(x + M, y) = f(x, y + N) = f(x + M, y + N)

• Separability

F(u, v) = (1/M) Σ_{x=0}^{M−1} e^{−j2πux/M} F(x, v),  where F(x, v) = (1/N) Σ_{y=0}^{N−1} f(x, y) e^{−j2πvy/N}

f(x, y) → F(x, v) → F(u, v)

(1-D row transform, then 1-D column transform)

• Convolution

f(x, y) ⊗ h(x, y) = (1/MN) Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} f(m, n) · h(x − m, y − n)

f ⊗ h ⇒ F·H;  f·h ⇒ F ⊗ H   → Prove (H.W.)
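The separability property can be verified directly: the 2-D DFT equals row-wise 1-D DFTs followed by column-wise 1-D DFTs:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((8, 8))

F_2d = np.fft.fft2(f)                                   # full 2-D DFT
F_rows_then_cols = np.fft.fft(np.fft.fft(f, axis=1),    # 1-D DFT along rows...
                              axis=0)                   # ...then along columns

print(np.allclose(F_2d, F_rows_then_cols))   # True
```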

8. Edge & Edge detection

I. Edges in Images

• An edge is a boundary between two regions with relatively distinct grey-level properties.

• Edges are curves in the image where rapid changes occur in brightness or in the spatial derivatives of brightness.

• A change of brightness can occur at:

• a change of surface orientation

• different surfaces

• one object occluding another

• a boundary between light and shadow falling on a single surface

II. Differential Operators

Figure 8—1

Figure 8—2

x sin θ − y cos θ + ρ = 0

E(x, y) = B1 + (B2 − B1) u(x sin θ − y cos θ + ρ)

With t = x sin θ − y cos θ + ρ:  E = B2 if t > 0;  B1 if t < 0;  ½(B1 + B2) if t = 0

u(t) = ∫_{−∞}^{t} δ(x) dx,  ∫_{−∞}^{∞} δ(x) dx = 1

Figure 8—3

∂E/∂x = sin θ (B2 − B1) δ(x sin θ − y cos θ + ρ)

∂E/∂y = −cos θ (B2 − B1) δ(x sin θ − y cos θ + ρ)

Squared gradient:

(∂E/∂x)² + (∂E/∂y)² = ((B2 − B1) δ(x sin θ − y cos θ + ρ))²

Laplacian of E(x, y):

∇²E = ∂²E/∂x² + ∂²E/∂y²

E_xx = ∂²E/∂x² = sin²θ (B2 − B1) δ′(x sin θ − y cos θ + ρ)

E_yy = ∂²E/∂y² = cos²θ (B2 − B1) δ′(x sin θ − y cos θ + ρ)

∇²E = (B2 − B1) δ′(x sin θ − y cos θ + ρ)

III. Discrete Approximations

• Discrete form of Laplacian

Ex ≈ E(i+1, j) − E(i, j)

Ey ≈ E(i, j+1) − E(i, j)

Exx ≈ E(i−1, j) − 2E(i, j) + E(i+1, j)

Eyy ≈ E(i, j−1) − 2E(i, j) + E(i, j+1)

Exx + Eyy ≈ (E(i−1, j) + E(i+1, j) + E(i, j−1) + E(i, j+1)) − 4E(i, j)
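The discrete Laplacian above can be sketched with plain array slicing; the test image z = i² + j² is an assumption for illustration (its Laplacian is exactly 2 + 2 = 4):

```python
import numpy as np

# 5-point discrete Laplacian: E(i-1,j) + E(i+1,j) + E(i,j-1) + E(i,j+1) - 4E(i,j)
i, j = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
E = (i ** 2 + j ** 2).astype(float)   # Exx = 2 and Eyy = 2 everywhere

lap = (E[:-2, 1:-1] + E[2:, 1:-1] + E[1:-1, :-2] + E[1:-1, 2:]
       - 4.0 * E[1:-1, 1:-1])

print(lap[0, 0])                      # 4.0
```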

IV. Local operators and noise

H' = −ρ² · 1/(1 + φnn/φbb) -------------ⓐ

ρ² = u² + v²

Suppose that φbb = S²/ρ² and φnn = N².

Then H' = −ρ²S² / (S² + ρ²N²) -ⓐ

At low frequencies: H' ≈ −ρ², the Laplacian operation.

At high frequencies: H' → −S²/N², a small gain, so noise can be reduced!

Derivation of ⓐ

Figure 8—4

H'' = φbi/φii = (φbb + φnb) / (φbb + φbn + φnb + φnn)

    = φbb / (φbb + φnn)      (signal and noise uncorrelated: φbn = φnb = 0)

    = 1 / (1 + φnn/φbb)      (= 1/(1 + 1/SNR))

H' = H'' · ℑ(∇²)

For example, if the optimal filter is a gaussian

Figure 8—5

h(x, y) = (1/2πσ²) e^{−(x² + y²)/2σ²}

H(u, v) = e^{−σ²(u² + v²)/2}

9. Lightness & color

Figure 9—1

b´(x,y) = e´(x,y) · r´(x,y)

• Note that r´(x,y) is constant within a patch, with sharp discontinuities at edges

between patches, while e´(x,y) varies smoothly

Figure 9—2

Task : separate r´(x,y) & e´(x,y)

• Take the logarithm of image brightness b´(x,y)

log b´(x,y) = log r´(x,y) +log e´(x,y)


b(x,y) = r(x,y) +e(x,y)
∇2b(x,y) =∇2r(x,y) +∇2e(x,y)

Figure 9—3

• cut slow slope by a threshold

Figure 9—4

T(∇²b(x,y)) = ∇²r(x,y)

if |∇²b(x,y)| > T then t(x,y) = ∇²b(x,y), else t(x,y) = 0

• recover r(x,y)

Figure 9—5

∇²l(x, y) = t(x, y)

Fourier transform: ℑ[∇²l(x, y)] = ℑ[t(x, y)]
⇒ −ρ² L(ρ) = T(ρ)
⇒ L(ρ) = −(1/ρ²) T(ρ) = G(ρ) · T(ρ)

where G(ρ) = −1/ρ², whose inverse transform is g(r) = (1/2π) log(r) + const

IFT: l(x, y) = g(r) ∗ t(x, y)

Figure 9—6

l(x,y) ≈ r(x,y) = reflectance (up to a constant)

e(x,y) = b(x,y) − l(x,y)
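A 1-D sketch of this scheme (illustrative only, not Horn's full 2-D algorithm; the step reflectance and the linear illumination ramp are made-up data):

```python
import numpy as np

# Log brightness b = r + e, with r piecewise constant (reflectance)
# and e slowly varying (illumination).
n = 200
x = np.arange(n)
r = np.where(x < n // 2, np.log(0.2), np.log(0.8))   # step reflectance
e = 0.001 * x                                        # smooth illumination
b = r + e

# Second difference (1-D Laplacian), thresholded to kill the smooth part.
d2 = np.zeros(n)
d2[1:-1] = b[:-2] - 2.0 * b[1:-1] + b[2:]
t = np.where(np.abs(d2) > 0.01, d2, 0.0)

# Integrate twice to recover r, up to a free constant fixed from r[0].
rec = np.cumsum(np.cumsum(t))
rec += r[0] - rec[0]

err = np.abs(rec - r)
err[n // 2 - 1] = 0.0      # the single transition pixel is ambiguous
print(err.max())           # ~0: reflectance recovered away from the step
```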

Figure 9—7

b1 ( x, y ) = r1 ( x, y)e1 ( x, y )
b2 ( x, y ) = r2 ( x, y )e2 ( x, y )
b3 = r3 ⋅ e3
b4 = r4 ⋅ e4
b5 = r5 ⋅ e5
. . .

R1 : b1/b2 = r1e1/(r2e2) = 37/74    (e1 ≅ e2) — equation 9.1

     ⇒ r1/r2 = 26/52 — equation 9.2

R2 : b2/b5 = 28/112 = r2/r5

     b2/b3 = 38/95 = r2/r3

     ⋮

Least-squares solution : 11 variables, 16 equations

0 ≤ reflectance ≤ 1

10. Reflectance Map : photometric stereo

(using one camera)

I. Radiometry
irradiance E = δP/δA

radiance L = δ²P/(δA δω)

solid angle = Area of a spherical surface patch / (radius of the sphere)2

Figure 10—1

Solid angle of a hemisphere = (½ · 4πR²)/R² = 2π
R2

Figure 10—2

Figure 10—3

n̂ = (−p, −q, 1) / √(1 + p² + q²)

II. Image Formation

Figure 10—4

solid angle of the object patch = δO cosθ / (z/cosα)²

solid angle of the image patch = δI cosα / (f/cosα)²

⇒ δO/δI = (cosα/cosθ)(z/f)² --------- equation 10.2.1

The solid angle subtended by the lens

Ω = π(d/2)² cosα / (z/cosα)² = (π/4)(d/z)² cos³α

where d is the diameter of the lens

Thus the power of the light originating on the patch and passing through the lens is

δP = L · δO Ω cosθ

where L is the radiance of the surface in the direction toward the lens

δP = L δO (π/4)(d/z)² cos³α cosθ

E = δP/δI = L (δO/δI)(π/4)(d/z)² cos³α cosθ

where E is the irradiance of the image at the patch under consideration

substituting for δO/δI, we finally obtain

E = L · (π/4)(d/f)² cos⁴α

∴ E ∝ L

III. Bidirectional reflectance distribution function (BRDF)

Figure 10—5

Figure 10—6

δE (θ i , φ i ) ; irradiance

The amount of light falling on the surface from the direction (θ i , φ i )

δ L (θ e , φ e ) : radiance

The brightness of the surface as seen from the direction

(θ e , φ e )

f(θi, φi; θe, φe) = δL(θe, φe) / δE(θi, φi)

BRDF normalization: ∫_{−π}^{π} ∫_{0}^{π/2} f(θi, φi; θe, φe) sinθe cosθe dθe dφe = 1

IV. Extended Light sources

Figure 10—7

solid angle δω = (area)/(distance)² = sinθi δθi δφi

(proof: ∫∫ δω = ∫_{−π}^{π} ∫_{0}^{π/2} sinθi dθi dφi = 2π)

E (θ i , φ i ) : Radiance per unit solid angle coming from the direction (θ i , φ i )

The radiance from the patch under consideration

E (θ i , φ i )δω = E (θ i , φ i ) sin θ i δθ i δφ i

Total irradiance of the surface is

E0 = ∫_{−π}^{π} ∫_{0}^{π/2} E(θi, φi) sinθi cosθi dθi dφi

where cos θ i accounts for the foreshortening of the surface as seen from the direction

(θ i , φ i )

The radiance of the surface,

L(θe, φe) = ∫∫ f(θi, φi; θe, φe) E(θi, φi) sinθi cosθi dθi dφi

V. Surface Reflectance properties

• Ideal Lambertian surface

One that appears equally bright from all viewing directions and reflects all

incident light, absorbing none

For a Lambertian surface the BRDF f(θi, φi; θe, φe) is constant:

∫_{−π}^{π} ∫_{0}^{π/2} f sinθe cosθe dθe dφe = 1

2π f ∫_{0}^{π/2} sinθe cosθe dθe = 1

π f = 1   ∴ f = 1/π

Since the BRDF is constant for a Lambertian surface,

L = ∫∫ f · E(θi, φi) sinθi cosθi dθi dφi = f · E0 = E0/π   (∵ f = 1/π)

where the irradiance is E0

• Ideal specular Reflector

Figure 10—8

∫_{−π}^{π} ∫_{0}^{π/2} f(θi, φi; θe, φe) sinθe cosθe dθe dφe = 1

⇒ k sinθi cosθi = 1  ⇒  k = 1/(sinθi cosθi)

In this case,  f(θi, φi; θe, φe) = δ(θe − θi) δ(φe − φi − π) / (sinθi cosθi)

Determine the radiance of a specular reflecting surface under an extended source where

the irradiance is Eo

L(θe, φe) = ∫_{−π}^{π} ∫_{0}^{π/2} [δ(θe − θi) δ(φe − φi − π)/(sinθi cosθi)] E(θi, φi) sinθi cosθi dθi dφi

          = E(θe, φe − π)

VI. Surface Brightness

Figure 10—9

1
L= E cosθ i
π

•Light source : a “sky” of uniform radiance E

L = ∫_{−π}^{π} ∫_{0}^{π/2} (1/π) E sinθi cosθi dθi dφi

  = (E/π) ∫_{−π}^{π} ∫_{0}^{π/2} sinθi cosθi dθi dφi

  = (E/π) · 2π · [−cos 2θi / 4]_{0}^{π/2} = (E/π) · 2π · ½ = E

The radiance of the patch ≡The radiance of the source

VII. Surface orientation

Figure 10—10

rx = (δx, 0, p δx)ᵀ,   ry = (0, δy, q δy)ᵀ

n = rx × ry ∝ (−p, −q, 1)ᵀ

The unit surface normal n̂:

n̂ = n/|n| = (−p, −q, 1)ᵀ / √(1 + p² + q²)

Figure 10—11

vˆ ≡ (0,0,1) T

assuming α is very small,

cosθ = n̂ · v̂ = [(−p, −q, 1)ᵀ / √(1 + p² + q²)] · (0, 0, 1)ᵀ = 1/√(1 + p² + q²)

VIII. The reflectance Map

Figure 10—12

sˆ1 → R1 ( p, q )
sˆ 2 → R2 ( p , q )

L = (1/π) E cosθi    for θi ≥ 0

cosθi = n̂ · ŝ = (1 + ps p + qs q) / (√(1 + p² + q²) √(1 + ps² + qs²)) = R(p, q) ; Reflectance Map

R(p, q) = (1 + ps p + qs q) / (√(1 + p² + q²) √(1 + ps² + qs²)) = c - equation 10.9.1

Figure 10—13

R1(p, q) = (1 + ps1 p + qs1 q) / (√(1 + p² + q²) √(1 + ps1² + qs1²))

R2(p, q) = (1 + ps2 p + qs2 q) / (√(1 + p² + q²) √(1 + ps2² + qs2²))

IX. Shading in image

Consider a smoothly curved object the image of such an object will have spatial
variations in brightness due to the fact that surface patches with different orientations
appear with different brightness the variation of brightness is called shading

E = L · (π/4)(d/f)² cos⁴α

E ∝ L ∝ cosθi ≡ R(p, q)

after normalizing, E(x, y) = R(p, q)

ρ : reflectance factor, where 0 ≤ ρ ≤ 1

E(x, y) = ρ (ŝ · n̂);  where ρ = 1, E(x, y) = ŝ · n̂

Ex1) Consider a sphere with a Lambertian surface, illuminated by a point source at

essentially the same place as the viewer.

Figure 10—14

In this case,  ŝ = (0, 0, 1),  v̂ = (0, 0, 1),  n̂ = (−p, −q, 1)/√(1 + p² + q²)

From equation 10.9.1 - R(p, q) = n̂ · ŝ

= (1 + ps p + qs q) / (√(1 + p² + q²) √(1 + ps² + qs²)) = 1/√(1 + p² + q²)

For the sphere, z − z0 = (r² − (x² + y²))^{1/2}, so

p = dz/dx = ½ (r² − (x² + y²))^{−1/2} (−2x) = −x/(z − z0)

q = dz/dy = −y/(z − z0)

Finally, E(x, y) = R(p, q)

E(x, y) = 1/√(1 + p² + q²)

        = 1/√(1 + (x² + y²)/(z − z0)²)

        = √(1 − (x² + y²)/r²)

At (x, y) = (0, 0): E(x, y) is maximum, E = 1

At x² + y² = r²: E(x, y) is minimum, E = 0
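A quick numerical sketch of this shaded sphere (grid size and radius are arbitrary choices):

```python
import numpy as np

# Lambertian sphere, source at the viewer:
# E(x, y) = sqrt(1 - (x^2 + y^2)/r^2) inside the disk, 0 outside.
r = 1.0
y, x = np.meshgrid(np.linspace(-1.2, 1.2, 101), np.linspace(-1.2, 1.2, 101),
                   indexing="ij")
rho2 = x ** 2 + y ** 2
E = np.where(rho2 <= r ** 2, np.sqrt(np.maximum(0.0, 1.0 - rho2 / r ** 2)), 0.0)

print(E[50, 50])          # 1.0 at the center (x, y) = (0, 0)
print(E.max(), E.min())   # brightest at the center, dark at the limb
```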

X. Photometric stereo

Figure 10—15

E1 = R1(p, q) = (1 + ps1 p + qs1 q) / (√(1 + p² + q²) √(1 + ps1² + qs1²))

E2 = R2(p, q) = (1 + ps2 p + qs2 q) / (√(1 + p² + q²) √(1 + ps2² + qs2²))

Ex2) R(p, q) : linear and independent

R1(p, q) = (1 + p1 p + q1 q)/r1

R2(p, q) = (1 + p2 p + q2 q)/r2

then p = ((E1 r1 − 1)q2 − (E2 r2 − 1)q1) / (p1 q2 − q1 p2)

     q = ((E2 r2 − 1)p1 − (E1 r1 − 1)p2) / (p1 q2 − q1 p2)
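The linear two-image case can be sketched directly; all numeric values below are made-up illustration data:

```python
# Two linear reflectance maps Ri = (1 + pi*p + qi*q)/ri give two linear
# equations in (p, q) from two brightness measurements.
p1, q1, r1 = 0.7, 0.3, 2.0    # map 1 parameters (assumed)
p2, q2, r2 = -0.2, 0.9, 1.5   # map 2 parameters (assumed)
p_true, q_true = 0.4, -0.6    # surface gradient to recover

E1 = (1 + p1 * p_true + q1 * q_true) / r1
E2 = (1 + p2 * p_true + q2 * q_true) / r2

# Closed-form solution from the notes:
den = p1 * q2 - q1 * p2
p = ((E1 * r1 - 1) * q2 - (E2 * r2 - 1) * q1) / den
q = ((E2 * r2 - 1) * p1 - (E1 * r1 - 1) * p2) / den

print(round(p, 6), round(q, 6))   # 0.4 -0.6
```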

Ex3) R(p,q) : nonlinear

Figure 10—16

XI. Albedo : reflectance factor :

Ei = ρ (ŝi · n̂)

where ŝi = (−pi, −qi, 1)ᵀ / √(1 + pi² + qi²)   for i = 1, 2, 3, …

n̂ = (−p, −q, 1)ᵀ / √(1 + p² + q²)

Figure 10—17

Unknown variables: p, q, ρ ⇒ at least 3 equations (3 light-source directions) are needed
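A sketch of the three-light case: writing m = ρ n̂ makes the system Sm = E linear, so |m| gives ρ and m/|m| gives n̂. The source directions and surface normal below are made-up illustration data:

```python
import numpy as np

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

S = np.array([unit([0.0, 0.0, 1.0]),
              unit([0.5, 0.0, 1.0]),
              unit([0.0, 0.5, 1.0])])      # three source directions (rows)
rho_true = 0.8
n_true = unit([-0.2, 0.3, 1.0])            # i.e. (p, q) = (0.2, -0.3)

E = rho_true * S @ n_true                  # the three measured brightnesses

m = np.linalg.solve(S, E)                  # S is 3x3 and nonsingular here
rho = np.linalg.norm(m)
n_hat = m / rho

print(round(rho, 6))                       # 0.8
print(np.allclose(n_hat, n_true))          # True
```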

11. Reflectance Map : shape from shading

I. Recovering Shape from Shading

Figure 11—1

i. Picture

One more information => strong constraint smoothness neighboring patches of the
surface => similar orientation assumption of smoothness : strong constraint

ii. Linear reflectance map

Figure 11—2

tanθ0 = b/a,   cosθ0 = a/√(a² + b²),   sinθ0 = b/√(a² + b²)

Figure 11—3

m(θ) = p cosθ + q sinθ : directional derivative (slope of the surface in direction θ)

⇒ m(θ0) = p cosθ0 + q sinθ0 = (ap + bq)/√(a² + b²) = f⁻¹(E(x, y)) / √(a² + b²)

z can be recovered from E(x, y) using the above curve

Figure 11—4

δz = m δξ   (small change along the characteristic curve)

m = δz/δξ = (1/√(a² + b²)) f⁻¹(E(x, y))

where x(ξ) = x0 + ξ cosθ0

      y(ξ) = y0 + ξ sinθ0

      z(ξ) = z0 + ∫_0^ξ m(θ0) dξ = z0 + (1/√(a² + b²)) ∫_0^ξ f⁻¹(E(x, y)) dξ

iii. Rotationally symmetric reflectance maps

Figure 11—5

tanθs = q/p,   cosθs = p/√(p² + q²),   sinθs = q/√(p² + q²)

The slope m(θs) = p cosθs + q sinθs = √(p² + q²) = √(f⁻¹(E(x, y)))

The change in z:

δz = m δξ = √(p² + q²) δξ

δx = cosθs δξ = (p/√(p² + q²)) δξ

δy = sinθs δξ = (q/√(p² + q²)) δξ

To simplify, we take a step of length √(p² + q²) δξ rather than δξ:

δz = √(p² + q²) · √(p² + q²) δξ = (p² + q²) δξ = f⁻¹(E(x, y)) δξ

δx = p δξ - equation 11.1.1

δy = q δξ

We need to determine p&q at the new point in order to continue the solution

E(x, y) = f(p² + q²)

Differentiate E(x, y) w.r.t. x & y:

Ex = ∂E/∂x = (∂E/∂S)(∂S/∂x) = f'(S) ∂S/∂x,   where S = p² + q²

∂S/∂x = 2p ∂p/∂x + 2q ∂q/∂x = 2p ∂²z/∂x² + 2q ∂²z/∂x∂y,   with p = ∂z/∂x, q = ∂z/∂y

With r = ∂²z/∂x², s = ∂²z/∂x∂y, t = ∂²z/∂y²:

Ex = 2(pr + qs) f'(S)
Ey = 2(ps + qt) f'(S) - equation 11.1.2

δp = (∂p/∂x) δx + (∂p/∂y) δy = r δx + s δy

δq = s δx + t δy

In our case δx = p δξ & δy = q δξ, so

δp = (pr + qs) δξ = (Ex / 2f') δξ      (from equation 11.1.2)

δq = (ps + qt) δξ = (Ey / 2f') δξ

As δξ → 0 , we obtain the differential equations

ẋ = p,  ẏ = q,  ż = p² + q² = f⁻¹(E(x, y))

ṗ = Ex/2f',  q̇ = Ey/2f'

(x, y, z, p & q) solvable: 5 differential equations

Differentiating ẋ = p, ẏ = q one more time w.r.t. ξ:

ẍ = ṗ = Ex/2f',   ÿ = q̇ = Ey/2f',   ż = f⁻¹(E(x, y))

iv. General Case

R(p,q) = f(ap + bq)

R(p,q) = f(p² + q²)

R(p,q) = arbitrary

δz = p δx + q δy

δp = (∂²z/∂x²) δx + (∂²z/∂x∂y) δy = r δx + s δy - equation 11.1.3

δq = (∂²z/∂x∂y) δx + (∂²z/∂y²) δy = s δx + t δy

(δp, δq)ᵀ = [r s; s t] (δx, δy)ᵀ = H · (δx, δy)ᵀ

where H is the Hessian Matrix

r + t = ∂²z/∂x² + ∂²z/∂y² : Laplacian

Differentiating E(x,y) w.r.t x & y

Ex = ∂E/∂x = (∂R/∂p)(∂p/∂x) + (∂R/∂q)(∂q/∂x) = Rp · r + Rq · s

Ey = Rp · s + Rq · t

(Ex, Ey)ᵀ = H · (Rp, Rq)ᵀ - equation 11.1.4

from equation 11.1.3 & equation 11.1.4

Let (δx, δy)ᵀ = (Rp, Rq)ᵀ · δξ, i.e. ẋ = Rp, ẏ = Rq

Then (δp, δq)ᵀ = H (δx, δy)ᵀ = H (Rp, Rq)ᵀ δξ = (Ex, Ey)ᵀ δξ

ẋ = Rp,  ẏ = Rq

ż = p Rp + q Rq    (since δz = p δx + q δy,  ż = p ẋ + q ẏ)

ṗ = Ex,  q̇ = Ey

5 unknown variables: x, y, z, p & q

Figure 11—6

II. Singular points

Figure 11—7

R( p , q ) < R( p 0 , q 0 ) for all ( p, q ) ≠ ( p 0 , q 0 )

Figure 11—8

At a singular point Rp = 0 and Rq = 0, so ẋ = Rp = 0, ẏ = Rq = 0.

Example:  R(p, q) = ½(p² + q²) - equation 11.3.1
2

(p,q) = (0,0) – singular point

1
z = z0 + (ax 2 + 2bxy + cy 2 )
2

dz
Thus p = = ax + by
dx
dz
& q= = bx + cy
dy

substituting p & q in equation 11.3.1

R(p, q) = ½(p² + q²) = ½[(ax + by)² + (bx + cy)²]

        = ½(a² + b²)x² + (a + c)bxy + ½(b² + c²)y²
2 2

The brightness gradient is

E x = (a 2 + b 2 ) x + (a + c)by
E y = (a + c)bx + (b 2 + c 2 ) y

Differentiating again,

Exx = a² + b²
Exy = (a + c)b      3 equations, 3 unknowns (a, b, c)
Eyy = b² + c²

III. Stereo graphic projection

Figure 11—9

b/(R + a) = √(f² + g²) / (2R)

b/a = √(p² + q²)

a² + b² = R²

solve the above equations:

f = 2p / (1 + √(1 + p² + q²)),   g = 2q / (1 + √(1 + p² + q²))

conversely

p = 4f / (4 − f² − g²),   q = 4g / (4 − f² − g²)
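The forward and inverse mappings can be sketched and round-tripped (the test gradient (1.5, −0.75) is arbitrary):

```python
import math

def pq_to_fg(p, q):
    # Stereographic coordinates from gradient components.
    s = 1.0 + math.sqrt(1.0 + p * p + q * q)
    return 2.0 * p / s, 2.0 * q / s

def fg_to_pq(f, g):
    # Inverse mapping back to gradient components.
    d = 4.0 - f * f - g * g
    return 4.0 * f / d, 4.0 * g / d

f, g = pq_to_fg(1.5, -0.75)
p, q = fg_to_pq(f, g)
print(round(p, 12), round(q, 12))   # back to (1.5, -0.75)
```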

IV. Relaxation methods

(an alternative to characteristic strip expansion)

E(x, y) = R(p, q) = Rs(f, g)

Minimize

es = ∫∫ (fx² + fy²) + (gx² + gy²) dx dy

Also, minimize

ei = ∫∫ (E(x, y) − Rs(f, g))² dx dy

overall, minimize es + λei

if the brightness measurements are very accurate, λ is large

if the brightness measurements are very noisy, λ is small

minimize ∫∫ F(f, g, fx, fy, gx, gy) dx dy

It's a problem in the calculus of variations (see appendix).

The corresponding Euler equations are

Ff − (∂/∂x)Ffx − (∂/∂y)Ffy = 0

Fg − (∂/∂x)Fgx − (∂/∂y)Fgy = 0

where Ff is the partial derivative of F with respect to f

F = (fx² + fy²) + (gx² + gy²) + λ(E(x, y) − Rs(f, g))²

The Euler equations for this problem yield

Ff = −2λ(E(x, y) − Rs(f, g)) ∂Rs/∂f

(∂/∂x)Ffx = (∂/∂x)(2fx) = 2fxx,   (∂/∂y)Ffy = 2fyy

2(fxx + fyy) = (∂/∂x)Ffx + (∂/∂y)Ffy = Ff

∇²f = −λ(E(x, y) − Rs(f, g)) ∂Rs/∂f

∇²g = −λ(E(x, y) − Rs(f, g)) ∂Rs/∂g      (2 equations, 2 unknowns (f, g))

Application to photometric stereo: consider n images,

e = ∫∫ ((fx² + fy²) + (gx² + gy²)) dx dy + Σ_{i=1}^{n} λi ∫∫ (Ei(x, y) − Ri(f, g))² dx dy

where Ei(x,y) is the brightness measured in the i th image and Ri is the corresponding

reflectance map

The Euler equations for this problem yield

n
∂Ri
∇ 2 f = −∑ λi ( Ei ( x, y) − Ri ( f , g ))
i =1 ∂f
n
∂Ri
∇ 2 g = −∑ λi ( Ei ( x, y ) − Ri ( f , g ))
i =1 ∂g

V. Recovering depth from a needle diagram

Figure 11—10

Figure 11—11

z(x, y) = z(x0, y0) + ∫_{(x0, y0)}^{(x, y)} (p dx + q dy)

Minimize the error

∫∫ ((zx − p)² + (zy − q)²) dx dy

where p and q are the given estimates of the components of the gradient while z x & z y

are the partial derivatives of the best-fit surface

Minimize ∫∫ I F ( z , z x , z y )dxdy

The Euler equations is

∂ ∂
Fz − Fz x − Fz y = 0
∂x ∂y

so that from F = (zx − p)² + (zy − q)², Fz = 0, and

(∂/∂x)Fzx + (∂/∂y)Fzy = 2(∂/∂x)(zx − p) + 2(∂/∂y)(zy − q) = 0

2(zxx − px) + 2(zyy − qy) = 0

zxx + zyy = px + qy    (∇²z = px + qy)

12. Motion field & Optical flow

Figure 12—1

• Motion field : a purely geometric concept


• Optical flow : Motion of brightness patterns

observed when a camera is moving relative to the objects being imaged

I. Motion Field

Figure 12—2

109
v0 = δr0/δt,  vi = δri/δt

ri/f' = r0/(r0 · ẑ) - equation 12.1.1

Differentiate equation 12.1.1:

(1/f')(dri/dt) = [(r0 · ẑ)(dr0/dt) − ((dr0/dt) · ẑ) r0] / (r0 · ẑ)²

               = [(r0 · ẑ)v0 − (v0 · ẑ)r0] / (r0 · ẑ)²

(1/f') vi = ((r0 × v0) × ẑ) / (r0 · ẑ)² - equation 12.1.2
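Equation 12.1.2 can be checked numerically against a finite-difference derivative of the projection (the world point and velocity below are made-up values):

```python
import numpy as np

# Perspective projection r_i = f' r0/(r0 . z_hat); its time derivative
# should match (1/f') v_i = ((r0 x v0) x z_hat)/(r0 . z_hat)^2.
f_prime = 1.0
z_hat = np.array([0.0, 0.0, 1.0])
r0 = np.array([0.3, -0.2, 2.0])      # world point (assumed)
v0 = np.array([0.1, 0.4, -0.5])      # its velocity (assumed)

def project(r):
    return f_prime * r / np.dot(r, z_hat)

# Closed-form image velocity from equation 12.1.2:
vi = f_prime * np.cross(np.cross(r0, v0), z_hat) / np.dot(r0, z_hat) ** 2

# Finite-difference estimate of the same velocity:
dt = 1e-6
vi_num = (project(r0 + dt * v0) - project(r0)) / dt

print(np.allclose(vi, vi_num, atol=1e-4))   # True
```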

II. Optical flow

Figure 12—3

Ex1)

Figure 12—4

E(x, y, t) ≡ E(x + uδt, y + vδt, t + δt), where u(x, y) & v(x, y) are the x & y components
of the optical flow vector.

For a small time interval δt, if brightness varies smoothly with x, y & t,

E(x + uδt, y + vδt, t + δt) = E(x, y, t) + δx ∂E/∂x + δy ∂E/∂y + δt ∂E/∂t + ρ ≡ E(x, y, t)

where ρ contains second and higher order terms in δx, δy and δt. Dividing by δt:

(∂E/∂x)(δx/δt) + (∂E/∂y)(δy/δt) + ∂E/∂t = 0

E x u + E y v + Et = 0 (Optical flow constraint equation)

u, v : two unknowns, one equation ⇒ one more constraint needed

III. Smoothness of the optical flow

Usually, the motion field varies smoothly, so minimize a measure of departure from

smoothness

e s = ∫ ∫ ((ux 2 + uy 2 ) + (vx 2 + vy 2 ))dxdy

Also, minimize the error in the optical flow constraint equation:

ec = ∫∫ (Ex u + Ey v + Et)² dx dy

Hence, minimize es + λec

λ : a parameter that weights the error in the image motion equation relative to the

departure from smoothness

λ is large when brightness measurements are accurate, small when they are noisy

Minimize ∫ ∫ F ( u , v, u x , u y , v x , v y )dxdy

F = (u X2 + u Y2 ) + (v x2 + v y2 ) + λ ( E x u + E y v + Et ) 2

Euler equations are

∂ ∂
Fu − Fu x − Fu y = 0
∂x ∂y
∂ ∂
Fv − Fvx − Fv y = 0
∂x ∂y

Fu = 2λ(Ex u + Ey v + Et)Ex

(∂/∂x)Fux = (∂/∂x)(2ux) = 2uxx

(∂/∂y)Fuy = (∂/∂y)(2uy) = 2uyy

⇒ λ(Ex u + Ey v + Et)Ex = uxx + uyy = ∇²u

Similarly,

λ(Ex u + Ey v + Et)Ey = ∇²v

IV. Boundary conditions

*Natural boundary conditions (see appendix)

Fux (dy/ds) = Fuy (dx/ds)

Fvx (dy/ds) = Fvy (dx/ds)

where S denotes arc length along the boundary curve

n̂ = (dy/ds, −dx/ds)ᵀ is a unit vector

perpendicular to the boundary

Rewriting the above conditions.

( Fu x , Fu y ) ⋅ nˆ = 0
( Fv x , Fv y ) ⋅ nˆ = 0

in our case, F = (u x2 + u 2y ) + (v x2 + v 2y ) + λ ( E x u + E y v + Et ) 2

(u x , u y ) ⋅ nˆ = 0
(v x , v y ) ⋅ nˆ = 0

V. Discrete case

From e = es + λec

= ∫ ∫ ((u x2 + u 2y ) + (v x2 + v 2y ))dxdy + λ ∫ ∫ ( E x u + E y v + Et ) 2 dxdy

e = ∑∑ ( sij + λ cij )
i j

1
si , j = ((u i +1, j − ui , j ) 2 + (u i, j +1 − ui , j ) 2 + (vi +1, j − vi , j ) 2 + (vi , j +1 − vi , j ) 2 )
4

cij = ( E x u ij + E y vij + Et ) 2

Differentiating w.r.t u kl & v kl yields

∂e/∂ukl = 2(ukl − ūkl) + 2λ(Ex ukl + Ey vkl + Et)Ex

∂e/∂vkl = 2(vkl − v̄kl) + 2λ(Ex ukl + Ey vkl + Et)Ey

where ū & v̄ are local averages of u & v
Setting ∂e/∂ukl = 0 and ∂e/∂vkl = 0 and solving:

(1 + λ(Ex² + Ey²)) ukl = (1 + λEy²) ūkl − λEx Ey v̄kl − λEx Et

(1 + λ(Ex² + Ey²)) vkl = −λEy Ex ūkl + (1 + λEx²) v̄kl − λEy Et

Iterative scheme:

ukl^(n+1) = ūkl^n − λ Ex (Ex ūkl^n + Ey v̄kl^n + Et) / (1 + λ(Ex² + Ey²))

vkl^(n+1) = v̄kl^n − λ Ey (Ex ūkl^n + Ey v̄kl^n + Et) / (1 + λ(Ex² + Ey²))
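A minimal sketch of this iterative scheme on a synthetic pair: a brightness ramp translated by (1, 0) pixels per frame, so the true flow is u = 1, v = 0. The λ value and the simple derivative estimators are assumptions for illustration:

```python
import numpy as np

H, W = 32, 32
x = np.arange(W, dtype=float)
E1 = np.tile(x, (H, 1))            # frame at time t
E2 = np.tile(x - 1.0, (H, 1))      # same ramp shifted right by one pixel

Ex = np.gradient(E1, axis=1)       # spatial derivatives (finite differences)
Ey = np.gradient(E1, axis=0)
Et = E2 - E1                       # temporal derivative

lam = 100.0
u = np.zeros((H, W))
v = np.zeros((H, W))

def local_avg(a):
    # 4-neighbor average (wrap-around borders; fine for this uniform field)
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 4.0

for _ in range(100):
    ub, vb = local_avg(u), local_avg(v)
    t = (Ex * ub + Ey * vb + Et) / (1.0 + lam * (Ex ** 2 + Ey ** 2))
    u = ub - lam * Ex * t
    v = vb - lam * Ey * t

print(round(float(u[16, 16]), 2), round(float(v[16, 16]), 2))   # 1.0 0.0
```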

Figure 12—5

13. Photogrammetry & Stereo

I. Disparity between the two images.

Figure 13—1

Disparity : x'l − x'r = D

x'l/f = (x + b/2)/z   ← (blue line) − (1)

x'r/f = (x − b/2)/z   ← (red line) − (2)

y'l/f = y'r/f = y/z − (3)

solve (1), (2) & (3):

x'l − x'r = bf/z : disparity

x = b(x'l + x'r)/2 / (x'l − x'r),   y = b(y'l + y'r)/2 / (x'l − x'r),   z = bf/(x'l − x'r)

z ∝ 1/(x'l − x'r) = 1/D
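The disparity relations can be sketched as a small triangulation routine; the baseline, focal length, and world point below are made-up illustration data:

```python
def triangulate(xl, xr, yl, yr, b, f):
    # Recover (x, y, z) from matched left/right image coordinates.
    D = xl - xr                      # disparity
    z = b * f / D
    x = b * (xl + xr) / 2.0 / D
    y = b * (yl + yr) / 2.0 / D
    return x, y, z

b, f = 0.2, 0.05                     # 20 cm baseline, 50 mm focal length (assumed)
# Images of the world point (x, y, z) = (0.1, 0.05, 2.0):
xl = f * (0.1 + b / 2) / 2.0         # = f(x + b/2)/z
xr = f * (0.1 - b / 2) / 2.0         # = f(x - b/2)/z
yl = yr = f * 0.05 / 2.0             # = f y / z

print(triangulate(xl, xr, yl, yr, b, f))   # ≈ (0.1, 0.05, 2.0)
```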

II. Photogrammetry

Figure 13—2

• absolute orientation    (rl, rr known)

rr = R rl + r0

rotational component: R = [r11 r12 r13; r21 r22 r23; r31 r32 r33]

translational component: r0 = (r14, r24, r34)ᵀ

where R is a 3×3 orthonormal matrix

RT ⋅ R = I

r11 xl + r12 y l + r13 z l + r14 = x r


r21 xl + r22 y l + r23 z l + r24 = y r
r31 xl + r32 y l + r33 z l + r34 = z r

3 equations => 12 unknowns

• Orthonormality

RT ⋅ R = I

r112 + r122 + r132 = 1


r212 + r222 + r232 = 1
r312 + r322 + r332 = 1

r11r21 + r12 r22 + r13 r23 = 0


r21r31 + r22 r32 + r23 r33 = 0
r31r11 + r32 r12 + r33 r13 = 0

Figure 13—3

|rl2 − rl1| = |rr2 − rr1|

3 + 7 = 10 equations, 12 unknown variables

P & Q : 10 + 3 = 13 equations

Figure 13—4

r r = R ⋅ r l + r 0 - equation 13.3.1

 r11 r12 r13   r14 


R = r21 r22 r23  r0 = r24 
 r31 r32 r33   r34 

III. Relative orientation

rl, rr not known; zl, zr unknowns

The projection points (x'l, y'l) & (x'r, y'r) are known; they determine the ratios of x and y to z

using

x'l/f = xl/zl  &  y'l/f = yl/zl
                                        - equation 13.4.1
x'r/f = xr/zr  &  y'r/f = yr/zr
substitute equation 13.4.1 into equation 13.3.1:

r11 xl + r12 yl + r13 zl + r14 = xr

multiply by f/zl and use xl = x'l zl/f, yl = y'l zl/f:

r11 x'l + r12 y'l + r13 f + r14 (f/zl) = x'r (zr/zl) = xr/s

r21 x'l + r22 y'l + r23 f + r24 (f/zl) = y'r (zr/zl) = yr/s - equation 13.3.2

r31 x'l + r32 y'l + r33 f + r34 (f/zl) = f (zr/zl)

12 + 2n ; # of unknowns for n points

7 + 3n ; # of equations

7 from orthonormality & distance (scale)

RᵀR = I : ri · ri = 1, ri · rj = 0 if i ≠ j

∴ 12 + 2n = 7 + 3n ⇒ n = 5, so n ≥ 5 points are needed

IV. Using a known relative orientation


zl/f = s ; constant    xl = x'l · s    yl = y'l · s    zl = f · s
f f

where s is a scaling factor

from equation 13.3.2

xr = (r11 x'l + r12 y'l + r13 f)s + r14 = as + u

yr = (r21 x'l + r22 y'l + r23 f)s + r24 = bs + v

zr = (r31 x'l + r32 y'l + r33 f)s + r34 = cs + w

x'r/f = xr/zr = (as + u)/(cs + w)

y'r/f = yr/zr = (bs + v)/(cs + w)

V. Computing depth

Once R, r0, (x'l, y'l), (x'r, y'r) are known, the depths (zl, zr) can be computed

as follows

Figure 13—5

From equation 13.3.2, multiply by zl/f:

(r11 x'l/f + r12 y'l/f + r13) zl + r14 = (x'r/f) zr

(r21 x'l/f + r22 y'l/f + r23) zl + r24 = (y'r/f) zr

(r31 x'l/f + r32 y'l/f + r33) zl + r34 = zr

From any two equations, compute zl & zr. Then compute

rl = (xl, yl, zl)ᵀ = (x'l/f, y'l/f, 1) zl

rr = (xr, yr, zr)ᵀ = (x'r/f, y'r/f, 1) zr

VI. Exterior orientation

Figure 13—6

rc = R ⋅ra + r0

r11 x a + r12 y a + r13 z a + r14 = xc


r21 xa + r22 y a + r23 z a + r24 = y c
r31 xa + r32 y a + r33 z a + r34 = z c

R T ⋅ R = I (orthonormality)

x'/f = xc/zc  &  y'/f = yc/zc

x'/f = xc/zc = (r11 xa + r12 ya + r13 za + r14)/(r31 xa + r32 ya + r33 za + r34)
                                                                                - equation 13.7.1
y'/f = yc/zc = (r21 xa + r22 ya + r23 za + r24)/(r31 xa + r32 ya + r33 za + r34)

# of equations = 2n + 6
# of unknowns = 12
2n + 6 = 12 ⇒ n = 3 (at least three points are needed)

VII. Interior Orientation
• scaling error
• translation error
• skewing error
• shearing error

An affine transformation

xc y
x' = a11 ( ) + a12 ( c ) + a13
zc zc
xc y
y' = a 21 ( ) + a 22 ( c ) + a 23
zc zc

from equation 13.7.1

x' s11 x a + s12 y a + s13 z a + s14


=
f s31 xa + s 32 y a + s33 z a + s 34
y ' s 21 x a + s 22 y a + s 23 z a + s 24
=
f s 31 x a + s 32 y a + s 33 z a + s34

Rsij : not orthonormal

T
Rsij ⋅ Rsij ≠ I 12 unknowns

#of equations = 2n

∴ 2n=12 ; n=6

VIII. Finding conjugate points

i. Gray-Level Matching

• (reference figure 3-1)

x'l/f = (x + b/2)/z  &  x'r/f = (x − b/2)/z

at a matched point

El ( xl' , y l' ) = E r ( xr' , y r' )

El(x'l, y'l) = Er(x'r, y'r)

⇒ El( f(x + b/2)/z , y' ) = Er( f(x − b/2)/z , y' )    (∵ y'l ≅ y'r = y')

Let x'/f ≅ x/z and define the disparity d(x', y') = bf/z

1 1
El ( x'+ d ( x' , y ' ), y' ) = E r ( x'− d ( x' , y ' ), y' )
2 2

criterion : minimize e = e s + λei

where e s = ∫ ∫ (∇ ⋅ ∇d ) 2 dx' dy'

ei = ∫ ∫ ( El − Er ) 2 dx' dy '

(solution) using Euler equation

Fd − (∂/∂x')Fdx' − (∂/∂y')Fdy' = 0

where

F = (∇²d)² + λ[El(x' + ½d(x', y'), y') − Er(x' − ½d(x', y'), y')]²

Fd = 2λ[El() − Er()]·[½ ∂El()/∂x' − (−½) ∂Er()/∂x'] = λ[El() − Er()]·[∂El/∂x' + ∂Er/∂x']

(∂/∂x')Fdx' + (∂/∂y')Fdy' = 2∇²(∇²d)

∴ ∇²(∇²d) = (λ/2) [El() − Er()]·[∂El/∂x' + ∂Er/∂x']

14. Pattern classification
Reference – "Pattern Classification and Scene Analysis",
Duda & Hart (Wiley-Interscience)

1. Bayes Decision Theory


Fundamental statistical approach to the problem of pattern classification:
• Maximum likelihood estimation : parameters fixed but unknown
• Bayesian estimation : parameters are random variables having some known a priori
distribution

(A) Maximum likelihood estimation

Figure 14—1

Given : 1. Model of system

2. Data : outcome of some type of probabilistic experiment

Goal : Estimate unknown model parameters from data

Let xi, i = 1, 2, ..., n be n i.i.d. observations on a random variable X drawn from

fX(x; θ)

The joint p.d.f. of X1, X2, ..., Xn is L(x1, ..., xn; θ) = Π_{i=1}^{n} fX(xi; θ),

where L(x1, ..., xn; θ) is called the likelihood function (LF).

The maximum likelihood estimate (MLE) θ̂ is the value that maximizes the LF, that

is

L(x1, ..., xn; θ̂) ≥ L(x1, ..., xn; θ)

Ex1) Assume X ~ N(μ, σ²) where σ is known. Compute the MLE of the mean μ.

<solution> The LF for n realizations of X is

L(x; μ) = (1/√(2πσ²))ⁿ exp( −(1/2σ²) Σ_{i=1}^{n} (xi − μ)² )

Since the log function is monotonic, the maximum of L(x; μ) is also that of log L(x; μ).

Hence log L(x; μ) = −(n/2) log(2πσ²) − (1/2σ²) Σ_{i=1}^{n} (xi − μ)²

Set ∂ log L(x; μ)/∂μ = 0

This yields (1/σ²) Σ_{i=1}^{n} (xi − μ) = 0, so μ̂ = (1/n) Σ_{i=1}^{n} xi

Ex2) Consider the normal p.d.f.,

fX(x; μ, σ²) = (1/√(2πσ²))ⁿ exp( −(1/2σ²) Σ_{i=1}^{n} (xi − μ)² )

The log likelihood function is

log L(x1, ..., xn; μ, σ²) = −(n/2) log 2π − n log σ − (1/2σ²) Σ_{i=1}^{n} (xi − μ)²

Now set ∂L/∂μ = 0, ∂L/∂σ = 0

Obtain the simultaneous equations

(1/σ²) Σ_{i=1}^{n} (xi − μ) = 0

−n/σ + (1/σ³) Σ_{i=1}^{n} (xi − μ)² = 0

Hence μ̂ = (1/n) Σ_{i=1}^{n} xi

σ̂² = (1/n) Σ_{i=1}^{n} (xi − μ̂)²
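A tiny numerical sketch of these estimators (the data values are arbitrary):

```python
# Gaussian MLEs: the sample mean and the biased (1/n) sample variance.
data = [1.0, 2.0, 3.0, 4.0]
n = len(data)

mu_hat = sum(data) / n
var_hat = sum((x - mu_hat) ** 2 for x in data) / n   # note 1/n, not 1/(n-1)

print(mu_hat, var_hat)   # 2.5 1.25
```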

(B) Bayesian Estimation

Suppose we want a classifier to separate two kinds of lumber, ash and birch.

Figure 14—2

A priori : • probability p(w1) : the next piece is ash

• probability p(w2) : the next piece is birch

Decision Rule : Decide w1 if p(w1) > p(w2)

Decide w2 otherwise

Bayes' Rule :

p(wj | x) = p(x | wj) · p(wj) / p(x)

where p(x) = Σ_j p(x | wj) · p(wj)

Bayes Decision Rule for minimizing the probability of error ;

Decide w1 if p(w1|x) > p(w2|x)

Decide w2 otherwise

Decide w1 if P(x| w1)p(w1) > p(x| w2)p(w2)

Decide w2 otherwise

If equally likely, p(w1)=p(w2), then

Decide w1 if p(x| w1) > p(x| w2)

Decide w2 if otherwise
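The two-class rule can be sketched with class-conditional Gaussian densities (all numbers below are made-up illustration values):

```python
import math

def gaussian(x, mu, sigma):
    # Normal density N(mu, sigma^2) evaluated at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

p_w1, p_w2 = 0.5, 0.5                 # equal priors (assumed)
mu1, mu2, sigma = 0.0, 2.0, 1.0       # class-conditional densities (assumed)

def decide(x):
    # Decide w1 if p(x|w1)p(w1) > p(x|w2)p(w2), else w2.
    return "w1" if gaussian(x, mu1, sigma) * p_w1 > gaussian(x, mu2, sigma) * p_w2 else "w2"

print(decide(0.2), decide(1.8))   # w1 w2  (boundary at x = 1 for equal priors)
```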

Figure 14—3

Probability of error : p(error)

p(error) = ∫_{−∞}^{∞} p(error, x) dx = ∫_{−∞}^{∞} p(error | x) p(x) dx

where p(error | x) = p(w1 | x) if we decide w2
                     p(w2 | x) if we decide w1

Figure 14—4

(B-1) Generalized Form (for Bayes classifier)

Let Ω = {w1, ..., ws} be the finite set of s states of nature and A = {α1, ..., αa} be the

finite set of possible actions.

Let λ (α i | w j ) be the loss incurred for taking action α i when the state of nature is w j

Let the feature vector X be a d-component vector-valued r.v

Then p(wj | x) = p(x | wj) p(wj) / p(X)

where p(X) = Σ_{j=1}^{s} p(x | wj) p(wj)

The expected loss associated with taking action α i is


R(αi | X) = Σ_{j=1}^{s} λ(αi | wj) · p(wj | X)

Target: to find a Bayes decision rule against p(wj) that minimizes the overall risk

The overall risk is given by

R = ∫ R(α ( x) | x) ⋅ p ( x)d x

Clearly, if α(x) is chosen so that R(α(x)|x) is as small as possible for every x, then

the overall risk will be minimized. Bayes decision rule for the general form, to minimize

the overall risk:

1. Compute the conditional risk R(αi | X) = Σ_{j=1}^{s} λ(αi | wj) · p(wj | X)

for i = 1, ..., a

2.Select the action α i for which R(α i | X ) is minimum


3.Obtain the resulting minimum overall risk called the Bayes risk

B.2 Two category classification

Let λij = λ(αi | wj) be the loss incurred for deciding wi when the true state of nature is wj.

R(α 1 | X ) = λ11 p ( w1 | X ) + λ12 p ( w2 | X )


- equation 14.1
R(α 2 | X ) = λ 21 p (w1 | X ) + λ 22 p ( w2 | X )

The decision rule :

Decide w1 if R(α 1 | X ) < R(α 2 | X ) - equation 14.2

Decide w2 otherwise

If we substitute equation 14.1 into equation 14.2,

We have

λ11 p(w1 | X) + λ12 p(w2 | X) < λ21 p(w1 | X) + λ22 p(w2 | X)

⇒ (λ12 − λ22) p(w2 | X) < (λ21 − λ11) p(w1 | X)

Hence the likelihood ratio test: decide w1 if

p(x | w1)/p(x | w2) > [(λ12 − λ22) p(w2)] / [(λ21 − λ11) p(w1)]
Figure 14—5

<Minimum-error-rate classification>

Symmetrical or zero-one loss function:

λ(αi | wj) = 0 if i = j,  1 if i ≠ j,   for i, j = 1, ..., c

(all errors are equally costly)

The conditional risk is


c
R(α i | X ) = ∑ λ (α i | w j ) ⋅ p ( w j | X )
j =1
c
= ∑ p( w j | X )
j≠i

= 1 − p ( wi | X )

To minimize the average probability of error, we should select the i that maximizes the a

posteriori probability p(wi | X). In other words, for minimum error rate:

Decide wi if p(wi | X) > p(wj | X) for all j ≠ i

B-3 The multicategory classification

A pattern classifier

Figure 14—6

gi(X), for i = 1, ..., c : discriminant functions

The classifier is said to assign a feature vector X to class wi

if gi(X) > gj(X) for all j ≠ i

gi(X) = p(wi | X) ; a posteriori probability

      = p(X | wi) p(wi) / Σ_{j=1}^{c} p(X | wj) p(wj)    (by Bayes rule)

⇒ equivalently, gi(X) = p(X | wi) p(wi)

⇒ or gi(X) = log p(X | wi) + log p(wi)

Decision Rule

1. Divide the feature space into c decision regions, R1, ..., Rc

2. If gi(X) > gj(X) for all j ≠ i, then X is in Ri

In other words, assign X to wi

Ex) three category classification

Probability of error calculation

P(error, x0) = P(error | x0) · P(x0)

P(error | x0) = P(x0 | w2)P(w2) + P(x0 | w3)P(w3)    (when x0 is assigned to w1)

Total error


P(error) = ∫ P(error , x)dx
−∞
