Abstract—This paper presents a comparative study of two competing features for the task of finding correspondence points in consecutive image frames: the Harris corner feature and the SIFT feature. First, consecutive frames are extracted from video data and the two kinds of features are computed. Second, correspondence point matching is performed. The results of SIFT keypoint matching and Harris keypoint matching are compared and discussed. The comparison is based on results obtained on outdoor video data under different conditions of intensity and angle of view.

Keywords- SIFT; Harris corner; correspondence points matching

I. INTRODUCTION

The discussion is given in Section 4 and the conclusion in Section 5. The paper closes with the acknowledgement in Section 6.

II. SIFT

SIFT [4] was first presented by David G. Lowe in 1999 and is described in full in [5]. His experiments show that the proposed algorithm is highly invariant and robust for feature matching under scaling, rotation, or affine transformation. Based on those conclusions, we use SIFT feature points to find correspondence points between two sequential images. The SIFT algorithm proceeds through four main steps: scale-space extrema detection, accurate keypoint localization, orientation assignment, and keypoint descriptor computation.
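To illustrate the first step, scale-space extrema detection, the sketch below (the function name and toy data are our own, not from the paper's implementation) applies the 26-neighbour comparison that Lowe performs on the difference-of-Gaussian pyramid:

```python
import numpy as np

def local_extrema(dog, s, y, x):
    """Return True if DoG sample (s, y, x) is a strict extremum among
    its 26 neighbours in the 3x3x3 scale-space cube (SIFT step 1)."""
    cube = dog[s-1:s+2, y-1:y+2, x-1:x+2]
    v = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)  # drop the centre sample itself
    return bool(v > others.max() or v < others.min())

# Tiny synthetic DoG stack (3 scales, 5x5) with one planted maximum.
dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0
print(local_extrema(dog, 1, 2, 2))  # True: centre exceeds all 26 neighbours
print(local_extrema(dog, 1, 2, 3))  # False: a larger neighbour exists
```

In the full algorithm this test is run at every pixel of every DoG level; the surviving candidates then go through the localization and edge-response filters of step two.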
To eliminate poorly localized edge responses, a keypoint is kept only if the ratio of principal curvatures of the difference-of-Gaussian function D, computed from its Hessian, satisfies

\frac{(D_{xx} + D_{yy})^2}{D_{xx} D_{yy} - (D_{xy})^2} < \frac{(r+1)^2}{r}    (8)
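A minimal numpy sketch of this test (the helper name and the finite-difference Hessian discretisation are our own choices, not the paper's code):

```python
import numpy as np

def passes_edge_test(D, y, x, r=10.0):
    """Apply the curvature-ratio test of Eq. (8) to a DoG image D at (y, x).
    Returns True when the keypoint is kept: tr(H)^2 / det(H) < (r+1)^2 / r."""
    # Hessian entries estimated by finite differences.
    Dxx = D[y, x+1] - 2*D[y, x] + D[y, x-1]
    Dyy = D[y+1, x] - 2*D[y, x] + D[y-1, x]
    Dxy = (D[y+1, x+1] - D[y+1, x-1] - D[y-1, x+1] + D[y-1, x-1]) / 4.0
    tr, det = Dxx + Dyy, Dxx*Dyy - Dxy**2
    if det <= 0:  # principal curvatures of opposite sign (or flat): reject
        return False
    return tr*tr / det < (r + 1)**2 / r

# Blob-like patch (curvature in both directions) passes the test;
# edge-like patch (curvature in one direction only) is rejected.
blob = np.array([[-((i-2)**2 + (j-2)**2) for j in range(5)] for i in range(5)], float)
edge = np.array([[-((j-2)**2) for j in range(5)] for i in range(5)], float)
print(passes_edge_test(blob, 2, 2))  # True
print(passes_edge_test(edge, 2, 2))  # False
```

With Lowe's suggested r = 10, keypoints whose principal curvatures differ by more than a factor of ten are discarded as edge responses.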
For the Harris detector [6], the change E produced by shifting a window w by (u, v) is

E(u, v) = \sum_{x,y} w(x, y)\,[I(x+u, y+v) - I(x, y)]^2    (11)

where
• E is the difference between the original and the moved window,
• u is the window's displacement in the x direction,
• v is the window's displacement in the y direction,
• w(x, y) is the window at position (x, y); it acts as a mask, ensuring that only the desired window is used.

A first-order Taylor expansion of I(x+u, y+v) approximates this as

E(u, v) \approx \sum_{x,y} [I(x, y) + u I_x + v I_y - I(x, y)]^2    (12)

We rewrite this equation in matrix form:

E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} \left( \sum_{x,y} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \right) \begin{bmatrix} u \\ v \end{bmatrix}    (13)

Fig. 3. Harris corner detection results

IV. CORRESPONDENCE POINTS MATCHING FROM HARRIS AND SIFT FEATURES
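One common way to match the resulting feature descriptors (a sketch under our own assumptions, not necessarily the exact procedure used in the experiments) is nearest-neighbour search combined with the distance-ratio test suggested in [5]:

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Match two descriptor sets by nearest-neighbour search plus Lowe's
    ratio test: keep a match only when the best distance is clearly
    smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 4-D descriptors: desc1[0] is close to desc2[1] only.
desc1 = np.array([[1.0, 0.0, 0.0, 0.0]])
desc2 = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
print(match_ratio_test(desc1, desc2))  # [(0, 1)]
```

The ratio test discards ambiguous correspondences, which is one reason SIFT matching tends to yield fewer false matches than matching raw Harris corner patches.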
Fig. 5. Matching points result from SIFT features

B. Comparison

Based on the experimental results, correspondence points from Harris features can be obtained with low time consumption, but it is difficult to achieve a high rate of correct points. From SIFT features, on the other hand, we can obtain correspondence points with high correctness and robustness.

V. CONCLUSIONS

REFERENCES

[1] Luo Juan and Oubong Gwun, "A Comparison of SIFT, PCA-SIFT and SURF", International Journal of Image Processing, Vol. 3, Issue 5, 2010.
[2] Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features", ECCV, Vol. 3951, pp. 404-417, 2006.
[3] R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision", Cambridge University Press, 2000.
[4] D. Lowe, "Object Recognition from Local Scale-Invariant Features", Proc. of the International Conference on Computer Vision, pp. 1150-1157, 1999.
[5] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, Vol. 60, pp. 91-110, 2004.
[6] C. Harris and M. Stephens, "A Combined Corner and Edge Detector", Proceedings of the 4th Alvey Vision Conference, pp. 147-151, Manchester, UK, 1988.