Pi19404
February 20, 2013
Contents
Feature Detection - Good Features To Track

0.1 Abstract
0.2 Image motion model
0.3 Implementation
    0.3.1 Requirements and Default Values
    0.3.2 Computing Eigen Values
    0.3.3 Filtering the Corners Points
0.4 Code
References
Let us assume that some motion has occurred between the two observation times. The motion is reflected as a change in the intensity patterns, and we analyze this change at each point in the image. The amount of motion δ(x, y) is called the displacement of the point p = (x, y).

The image observed at time t + τ can be obtained by taking the image observed at time t and moving every point by a suitable displacement vector. The displacement vector is a function of the image position x. Over a small interval of time the motion can be assumed to be affine, i.e. a combination of translation, rotation and/or scaling.
Under the affine motion model the displacement is

    δ(x) = D x + t                                                    (1)

where

    D = [ dxx  dxy ]
        [ dyx  dyy ]

is the deformation matrix and t = (tx, ty) is the translation vector. Thus to track motion we need to determine these 6 parameters. A point x in the first image frame moves to the point A x + t in the second image frame, where A = 1 + D and 1 is the 2x2 identity matrix. The error in estimation is determined by taking the difference between the intensity at A x + t in the second image and the intensity at x in the first image; if the intensity values are the same the error is zero.
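As a concrete illustration of the model in equation (1), the following minimal numpy sketch applies an affine displacement to a single point; the entries of D and t are made-up values chosen only for the example:

```python
import numpy as np

# Affine motion model: delta(x) = D x + t  (equation 1)
D = np.array([[0.01, -0.02],     # deformation matrix (hypothetical values)
              [0.03,  0.04]])
t = np.array([1.5, -0.5])        # translation vector (hypothetical values)

x = np.array([10.0, 20.0])       # a point in the first frame
delta = D @ x + t                # its displacement
x_new = (np.eye(2) + D) @ x + t  # position in second frame: A x + t, A = 1 + D

print(delta)   # [1.2 0.6]
print(x_new)   # [11.2 20.6]
```

Note that x_new = x + delta, which is just the definition of the displacement.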
If we consider a small neighborhood about a point, all points in it will have approximately the same displacement vector. Thus for a patch in the first image we can locate the corresponding patch in the second image. However, since we consider a small interval of time, the motion can be assumed to be purely translational, so D = 0 and A = 1.
Under pure translation the dissimilarity between the two images over a neighborhood W is

    ε = ∫∫_W [ J(x + d) - I(x) ]^2 w(x) dx                            (2)

where W is the neighborhood, w(x) is a weighting function, I is the first image and J the second. We assume that the error function has a global minimum, and use a first-order Taylor expansion of J(x + d):

    J(x + d) ≈ J(x) + d · ∇J(x)

To minimize the error we differentiate it with respect to the translation parameters d and set the derivative to zero.
Substituting the Taylor expansion into (2) and setting the derivative to zero gives

    ∂ε/∂d = 2 ∫∫_W [ J(x) + d · ∇J(x) - I(x) ] ∇J(x) w(x) dx = 0

Rearranging the terms yields a 2x2 linear system

    G d = e                                                           (3)

where, writing Ix = ∂J/∂x, Iy = ∂J/∂y and It = I(x) - J(x),

    G = ∫∫_W [ Ix^2    Ix Iy ] w(x) dx
             [ Ix Iy   Iy^2  ]

    e = ∫∫_W It [ Ix ] w(x) dx
                [ Iy ]
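A minimal numpy sketch of building and solving the 2x2 system G d = e for a single patch; the derivative arrays here are synthetic stand-ins for real image gradients, used only to show the structure of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel derivatives over a 15x15 window W
# (stand-ins for Ix, Iy and the temporal difference It = I - J)
Ix = rng.standard_normal((15, 15))
Iy = rng.standard_normal((15, 15))
It = rng.standard_normal((15, 15))
w = np.ones((15, 15))            # uniform weighting function w(x)

# Build G and e by summing over the window (equation 3)
G = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
              [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
e = np.array([np.sum(w * It * Ix), np.sum(w * It * Iy)])

# Solving for the displacement requires G to be well conditioned
d = np.linalg.solve(G, e)
print(d)
```

In a real tracker this solve would be repeated per iteration until the residual error falls below a tolerance, as described next.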
Thus, given a pair of successive frames, we can evaluate d. This is done recursively until convergence is attained, i.e. when the error falls below a certain value. A feature or patch can therefore be tracked reliably if d can be found at every frame. Since we are required to compute G, G must be well conditioned.
In practical scenarios the elements of G are bounded by the largest pixel value, since the largest derivative value can be 255. Thus the eigenvalues are also bounded by the largest available pixel value. If the derivative values are stronger we get larger eigenvalues, so the eigenvalues give a measure of the strength of the derivatives. A good feature is therefore one whose minimum eigenvalue is greater than a specified threshold.
This leads to the matrix being well conditioned. A well-conditioned matrix, however, only indicates large derivative values; it does not by itself guarantee that the point being tracked is the same. For verification, after the estimation is performed we compute the prediction error; if the error is larger than a specified value, we conclude that the tracked features are not the same. In the present article, however, the only concern is to locate the pixels which lead to a well-conditioned matrix. Thus, given two frames, we compute the matrix G and its minimum eigenvalue at all points. Only points with minimum eigenvalue greater than a threshold can be tracked reliably, and these are the good features to track. The task of feature detection is to identify such points in the image.
0.3 Implementation
0.3.1 Requirements and Default Values
The inputs to the algorithm are:
1. Input image
2. Number of corners to be detected
3. Minimum eigenvalue threshold
4. Minimum distance between corner points
5. Block size
6. Mask image

The outputs of the algorithm are:
1. The number of corners detected
2. The locations of the corner points

Edges are bounded by the maximum value taken by the pixel, i.e. 1, and the user-specified threshold for the minimum eigenvalue can be any number between 0 and 1, acting as a scaling factor on the maximum value of the edge pixel. The default is chosen as 0.01 of the maximum edge-pixel value. Typically we also specify a minimum distance between the detected points so that the feature points are not all clustered in one region; the default value is chosen to be 10. To determine the minimum eigenvalues, we need to evaluate the features in a small neighborhood about each point. The default neighborhood size is chosen as 15.
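The parameters and defaults above can be collected in a small configuration structure. This is a hypothetical sketch; the names below are illustrative, not the author's actual API:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class GoodFeatureParams:
    """Illustrative parameter bundle mirroring the inputs listed above."""
    max_corners: int = 100             # number of corners to be detected
    quality_level: float = 0.01        # min-eigenvalue threshold, fraction of the maximum
    min_distance: float = 10.0         # minimum distance between corner points
    block_size: int = 15               # neighborhood size for the eigenvalue computation
    mask: Optional[np.ndarray] = None  # optional mask restricting where corners are searched

params = GoodFeatureParams()
print(params.quality_level, params.min_distance, params.block_size)
```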
0.3.2 Computing Eigen Values

Computing the minimum eigenvalue is essentially a neighborhood operation. For this we require the first derivatives along the x and y directions; we use Sobel edge detection to compute them. The edge detection also requires an aperture size as input, whose default is taken as 3. After computing the derivatives at each point of the image, we require the 3 quantities Dx^2, Dy^2 and Dx Dy. An easy way to compute the minimum eigenvalue over a block is to first take the average over a window of the block size and then evaluate the eigenvalue at each point.
    G = [ Σ_W Dx^2    Σ_W Dx Dy ]
        [ Σ_W Dx Dy   Σ_W Dy^2  ]

Writing a = Σ_W Dx^2, b = Σ_W Dx Dy and c = Σ_W Dy^2, the eigenvalues of this 2x2 symmetric matrix are

    λ = (a + c)/2 ± (1/2) √( (a + c)^2 - 4(ac - b^2) )
      = (a + c)/2 ± (1/2) √( (a - c)^2 + 4 b^2 )

and the minimum eigenvalue corresponds to the minus sign.
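A sketch of the closed-form minimum-eigenvalue computation in numpy. As an assumption for brevity, it uses central differences in place of Sobel derivatives and a uniform (unweighted) block sum; the function name is illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def min_eigenvalue_map(img, block_size=5):
    """Minimum eigenvalue of the 2x2 gradient matrix G over each block."""
    img = img.astype(np.float64)
    # First derivatives (central differences; Sobel would be used in practice)
    Dy, Dx = np.gradient(img)

    def block_sum(a):
        # Sum of a over every block_size x block_size window (valid region only)
        return sliding_window_view(a, (block_size, block_size)).sum(axis=(2, 3))

    a = block_sum(Dx * Dx)   # sum over W of Dx^2
    b = block_sum(Dx * Dy)   # sum over W of Dx Dy
    c = block_sum(Dy * Dy)   # sum over W of Dy^2

    # lambda_min = (a + c)/2 - sqrt((a - c)^2 + 4 b^2)/2
    return (a + c) / 2.0 - np.sqrt((a - c) ** 2 + 4.0 * b ** 2) / 2.0

# A flat image has zero gradients everywhere, so lambda_min is 0 at every point
flat = np.full((20, 20), 7.0)
print(min_eigenvalue_map(flat).max())  # 0.0
```

Since G is a sum of positive semi-definite rank-1 terms, the minimum eigenvalue is always non-negative.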
0.3.3 Filtering the Corners Points

Thus we set to 0 all the points whose minimum eigenvalue is less than, say, 1/100 of the maximum eigenvalue over the image.
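The two filtering steps described above, thresholding by quality and then enforcing a minimum distance between accepted corners, can be sketched as follows; the function name and the greedy suppression strategy are illustrative assumptions, not the author's actual implementation:

```python
import numpy as np

def filter_corners(eig_map, quality_level=0.01, min_distance=10, max_corners=100):
    """Keep strong, well-separated corner candidates from a min-eigenvalue map."""
    # 1. Quality threshold: drop points below quality_level * global maximum
    threshold = quality_level * eig_map.max()
    ys, xs = np.nonzero(eig_map > threshold)
    vals = eig_map[ys, xs]

    # 2. Sort the surviving candidates by strength, strongest first
    order = np.argsort(vals)[::-1]

    # 3. Greedily accept points at least min_distance from all accepted ones
    kept = []
    for i in order:
        p = np.array([ys[i], xs[i]], dtype=np.float64)
        if all(np.linalg.norm(p - q) >= min_distance for q in kept):
            kept.append(p)
        if len(kept) == max_corners:
            break
    return np.array(kept)

# Toy eigenvalue map with two strong, well-separated peaks
eig = np.zeros((30, 30))
eig[5, 5] = 1.0
eig[20, 20] = 0.8
corners = filter_corners(eig, min_distance=10)
print(corners)  # [[ 5.  5.] [20. 20.]]
```

The greedy pass keeps the strongest corner in each neighborhood, which is what prevents all detections from clustering in one highly textured region.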
0.4 Code
We define a main FeatureDetector base class containing methods and data common to all feature detectors. The GoodFeatureToTrack class is a derived class containing the specific implementation of the algorithm. The OpenCV code can be found in the code repository at https://github.com/pi19404/m19404/tree/master/FEATURE_DETECTOR or https://code.google.com/p/m19404/source/browse/FEATURE_DETECTOR/.
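The repository code is OpenCV C++; a hypothetical Python sketch of the same base/derived class layout (the names mirror the description above, not the actual source, and detect is a placeholder):

```python
import numpy as np

class FeatureDetector:
    """Base class holding data and methods common to all feature detectors."""
    def __init__(self, max_corners=100, mask=None):
        self.max_corners = max_corners
        self.mask = mask

    def detect(self, img):
        raise NotImplementedError

class GoodFeatureToTrack(FeatureDetector):
    """Derived class: minimum-eigenvalue (good-features-to-track) detection."""
    def __init__(self, quality_level=0.01, min_distance=10, block_size=15, **kw):
        super().__init__(**kw)
        self.quality_level = quality_level
        self.min_distance = min_distance
        self.block_size = block_size

    def detect(self, img):
        # Placeholder: a real implementation would compute the minimum-eigenvalue
        # map, threshold it by quality_level, and enforce min_distance between
        # the accepted corner points.
        eig = np.zeros_like(img, dtype=np.float64)
        corners = np.empty((0, 2))
        return corners, eig

detector = GoodFeatureToTrack(quality_level=0.01, min_distance=10)
corners, eig = detector.detect(np.zeros((10, 10)))
print(len(corners))  # 0
```

The base class keeps the parts shared by every detector (corner budget, mask), while each derived class supplies its own detect method.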
Bibliography

[1] Jianbo Shi and C. Tomasi. "Good features to track". In: Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94, 1994 IEEE Computer Society Conference on. doi: 10.1109/CVPR.1994.323794.