An Analysis of Different Estimation Tools for Parameter Estimation in Ellipse Fitting to 2D Edge Images
Project Report
by
Rajmohan Asokan
Person No: 50097503
E-Mail ID: rasokan@buffalo.edu
Department of Mechanical and Aerospace Engineering
Contents

1 Introduction
  1.1 Motivation
  1.2 Problem Statement
2 Least Squares Ellipse Fitting Based on Minimization of Algebraic Distance
  2.1 Method
  2.2 Results
3 Numerically Stable Direct Least Squares Fitting of Ellipses
  3.1 Results
4 Approximate Maximum Likelihood Method
  4.1 Levenberg-Marquardt Algorithm
  4.2 Results
5 Conclusion
6 Appendix
  6.1 Ellipse Plotting
  6.2 Least squares ellipse fitting based on minimization of algebraic distance
  6.3 Numerically stable Direct Least Squares Fitting of Ellipses
  6.4 Approximate Maximum Likelihood Method
1 Introduction

1.1 Motivation
Parameter estimation has long been a major research area in computer vision and graphics. In particular, estimating the parameters of quadratic curves fitted to data extracted through computer vision techniques has proved important. For example, curve fitting has found application in industrial machine vision for parts identification and has also been useful in non-destructive testing of hardware components. My master's research concentrates on applying estimation techniques to the shape parameters of solid objects. One such application is in the archaeological sciences, where excavated objects are often pieces of a larger object that requires efficient reconstruction.
1.2 Problem Statement
Least squares approaches to fitting ellipses can be categorized into geometric fits and algebraic fits [1]. A geometric fit is obtained by minimizing the sum of the squares of the distances from the given points to the curve, whereas an algebraic fit estimates the parameters of the conic equation ax^2 + 2bxy + cy^2 + 2dx + 2ey + f = 0 using least squares.
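To make the distinction concrete, here is a tiny illustration (a hypothetical Python sketch, not part of the original report): the algebraic distance of a point to a conic is simply the value of the conic polynomial at that point.

```python
# Hypothetical coefficients (a, b, c, d, e, f) of the conic
# a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*e*y + f = 0 (a unit circle here).
COEF = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)

def algebraic_distance(coef, x, y):
    """Value of the conic polynomial at (x, y); zero for points on the curve."""
    a, b, c, d, e, f = coef
    return a*x*x + 2*b*x*y + c*y*y + 2*d*x + 2*e*y + f

print(algebraic_distance(COEF, 1.0, 0.0))  # 0.0 (on the circle)
print(algebraic_distance(COEF, 2.0, 0.0))  # 3.0 (off the circle)
```

The geometric fit would instead measure the Euclidean distance from each point to the nearest point on the curve, which has no closed form for a general conic.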
2 Least Squares Ellipse Fitting Based on Minimization of Algebraic Distance

2.1 Method

A conic in the plane can be written in matrix form as

x^T A x + b^T x + c = 0    (2.1)

with a symmetric 2 × 2 matrix A, a vector b and a scalar c. Consider the coordinate transformation

x = Q x̄ + t    (2.4)

where Q is orthogonal and t is a translation.
Then

(Q x̄ + t)^T A (Q x̄ + t) + b^T (Q x̄ + t) + c = 0    (2.5)

x̄^T (Q^T A Q) x̄ + (2 t^T A + b^T) Q x̄ + t^T A t + b^T t + c = 0.    (2.6)

Therefore, with Ā = Q^T A Q, b̄^T = (2 t^T A + b^T) Q and c̄ = t^T A t + b^T t + c,

x̄^T Ā x̄ + b̄^T x̄ + c̄ = 0.    (2.7)

Q is chosen in such a way that Ā = diag(λ1, λ2), and since the conic is an ellipse, t is chosen such that b̄ = 0:

λ1 x̄1^2 + λ2 x̄2^2 + c̄ = 0.    (2.8)

The semi-axes of the ellipse are then

a = sqrt(−c̄ / λ1)    (2.9)

b = sqrt(−c̄ / λ2).    (2.10)
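This reduction to canonical form can be mirrored numerically. The sketch below (Python/NumPy; the example coefficients are an assumption chosen for illustration) recovers the center t and the semi-axes from A, b and c exactly along these lines.

```python
import numpy as np

# Assumed example: x'Ax + b'x + c = 0 encoding (x-1)^2/4 + (y+1)^2 = 1,
# an ellipse with semi-axes 2 and 1 centered at (1, -1).
A = np.array([[0.25, 0.0],
              [0.0,  1.0]])
b = np.array([-0.5, 2.0])
c = 0.25

lam, Q = np.linalg.eigh(A)         # eigenvalues lambda_1, lambda_2 of A
t = np.linalg.solve(A, -b / 2.0)   # center: translation that makes b_bar = 0
c_bar = t @ A @ t + b @ t + c      # constant term in the canonical frame
axes = np.sqrt(-c_bar / lam)       # semi-axes sqrt(-c_bar / lambda_i)

print(t, axes)  # center (1, -1), semi-axes 2 and 1
```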
The eigenvalues λ1 and λ2 of the matrices A and Ā are the same because Q Q^T = I, so they are invariant under the coordinate transformation. Writing the matrix A as

A = [a11 a12; a12 a22]    (2.12)

we have

det A = a11 a22 − a12^2 = λ1 λ2    (2.13)

trace A = a11 + a22 = λ1 + λ2.    (2.14)

With k^2 = λ1/λ2 denoting the ratio of the eigenvalues,

(λ1 + λ2)^2 / (λ1 λ2) = k^2 + 1/k^2 + 2    (2.15)

which can be summarized into the form

k^4 − 2ρ k^2 + 1 = 0    (2.16)

k^2 = ρ + sqrt(ρ^2 − 1)    (2.17)

where ρ is given by

ρ = (trace A)^2 / (2 det A) − 1.    (2.18)

Now the constrained least squares problem is given by

x_i^T A x_i + b^T x_i + c ≈ 0,  i = 1, …, m    (2.19)

subject to Bookstein's constraint

λ1^2 + λ2^2 = 1    (2.20)

which in terms of the coefficients reads a11^2 + 2 a12^2 + a22^2 = 1. The problem can then be written as

S [v; w] ≈ 0    (2.21)

where S is the coefficient matrix with rows [x_i, y_i, 1, x_i^2, sqrt(2) x_i y_i, y_i^2], and the vectors v and w are given by

v = [b1, b2, c]^T    (2.22)

w = [a11, sqrt(2) a12, a22]^T.    (2.23)

Computing the QR decomposition S = QR and partitioning

R = [R11 R12; 0 R22]    (2.24)

reduces the problem to

R22 w ≈ 0    (2.25)

which must satisfy the constraint ||w|| = 1, and this may be solved using the singular value decomposition

R22 = U Σ V^T    (2.26)

where w is given by w = v3, the right singular vector belonging to the smallest singular value, and v is then obtained from

v = −R11^{−1} R12 w.    (2.27)
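The whole QR/SVD procedure fits in a few lines. A possible NumPy transcription (illustrative; it mirrors the MATLAB listing in Section 6.2, including the column ordering of S):

```python
import numpy as np

def fit_ellipse_bookstein(x, y):
    """Least squares conic fit under the Bookstein constraint ||w|| = 1.
    Columns of S are [x, y, 1, x^2, sqrt(2)xy, y^2]."""
    m = len(x)
    S = np.column_stack([x, y, np.ones(m), x**2, np.sqrt(2)*x*y, y**2])
    _, R = np.linalg.qr(S)
    R11, R12, R22 = R[:3, :3], R[:3, 3:], R[3:, 3:]
    _, _, Vt = np.linalg.svd(R22)
    w = Vt[-1]                          # unit-norm minimizer of ||R22 w||
    v = np.linalg.solve(R11, -R12 @ w)  # v = -R11^{-1} R12 w
    return v, w                         # v = [b1, b2, c], w = [a11, sqrt(2)a12, a22]

# Points on the unit circle: the fit recovers a11 = a22, a12 = 0, c = -a11.
theta = np.linspace(0.0, 2.0*np.pi, 12, endpoint=False)
v, w = fit_ellipse_bookstein(np.cos(theta), np.sin(theta))
```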
2.2 Results
[Figure: Truth]

[Figures: least squares fit results on the synthetic data and on the edge-image data points; axis tick labels omitted]
3 Numerically Stable Direct Least Squares Fitting of Ellipses

The numerically stable version of Direct Least Squares by Halíř and Flusser [2] is a stable reformulation of the method of Fitzgibbon et al. [3]. The ellipse equation is

F(x, y) = ax^2 + bxy + cy^2 + dx + ey + f = 0    (3.1)

and the ellipse constraint is given by b^2 − 4ac < 0. The coefficients of the ellipse are a, b, c, d, e, f, and x, y are the coordinates of the points. The value of this polynomial is the algebraic distance of the point (x, y) to the ellipse. In vector form the polynomial equation is given by

F_a(X) = X · a = 0    (3.2)

where X = [x^2, xy, y^2, x, y, 1] and a = [a, b, c, d, e, f]^T.    (3.3)

The fit minimizes the sum of squared algebraic distances over the N data points,

min_a ||D a||^2    (3.4)

subject to the rescaled ellipse constraint

a^T C a = 1    (3.5)

where D = [D1 | D2] is the design matrix built from the data points,
with

D1 = [x_1^2  x_1 y_1  y_1^2; … ; x_N^2  x_N y_N  y_N^2]    (3.6)

D2 = [x_1  y_1  1; … ; x_N  y_N  1].    (3.7)

Now, consider a scatter matrix, S, partitioned as

S = [S1 S2; S2^T S3]    (3.8)

where S1 = D1^T D1, S2 = D1^T D2 and S3 = D2^T D2.    (3.9)
The constraint matrix has the block form C = [C1 0; 0 0] where

C1 = [0 0 2; 0 −1 0; 2 0 0]    (3.10)

so that a^T C a = 4ac − b^2. The coefficients of the ellipse can also be expressed as

a = [a1; a2],  where a1 = [a, b, c]^T and a2 = [d, e, f]^T.

By using a Lagrange multiplier λ, the constrained problem becomes

[S1 S2; S2^T S3] [a1; a2] = λ [C1 0; 0 0] [a1; a2]    (3.11)

which splits into the pair of equations

S1 a1 + S2 a2 = λ C1 a1    (3.13)

S2^T a1 + S3 a2 = 0.    (3.14)

Eliminating a2 = −S3^{−1} S2^T a1 from the first equation gives the reduced eigenproblem

M a1 = λ a1    (3.15)

where

M = C1^{−1} (S1 − S2 S3^{−1} S2^T)    (3.18)

is known as the reduced scatter matrix, together with the constraint

a1^T C1 a1 = 1.    (3.19)

The optimal solution corresponds to the eigenvector a1 of the matrix M for which a1^T C1 a1 > 0.
Once the coefficients a of the ellipse are obtained, the ellipse is plotted with the
set of equations given in Section 6.1 to compute the geometric parameters of the ellipse.
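As a sketch of the reduced eigenproblem, here is a possible NumPy version (illustrative; the function and variable names are mine, and the input is assumed to be clean synthetic data):

```python
import numpy as np

def direct_ellipse_fit(x, y):
    """Direct least squares ellipse fit via the reduced scatter matrix."""
    D1 = np.column_stack([x**2, x*y, y**2])        # quadratic part
    D2 = np.column_stack([x, y, np.ones(len(x))])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2   # scatter blocks
    C1 = np.array([[0.0, 0.0, 2.0],
                   [0.0, -1.0, 0.0],
                   [2.0, 0.0, 0.0]])               # constraint block, 4ac - b^2 = 1
    M = np.linalg.solve(C1, S1 - S2 @ np.linalg.solve(S3, S2.T))
    _, vecs = np.linalg.eig(M)
    vecs = np.real(vecs)
    # exactly one eigenvector satisfies the ellipse condition a1' C1 a1 > 0
    cond = 4.0*vecs[0]*vecs[2] - vecs[1]**2
    a1 = vecs[:, np.argmax(cond)]
    a2 = -np.linalg.solve(S3, S2.T @ a1)           # back-substitution
    return np.concatenate([a1, a2])                # [a, b, c, d, e, f]

# Points on x^2/4 + y^2 = 1: the fit recovers x^2 + 4y^2 - 4 = 0 up to scale.
theta = np.linspace(0.0, 2.0*np.pi, 16, endpoint=False)
coef = direct_ellipse_fit(2.0*np.cos(theta), np.sin(theta))
```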
3.1 Results

[Figure 7: Truth]

The data from the true elliptic arc is taken as input to the Direct Least Squares Method.
[Figures: Fitted Ellipse & Data Points (legend: Ellipse Fit, Data Points); Direct Least Squares results on the synthetic data and on the edge-image data; axis tick labels omitted]
4 Approximate Maximum Likelihood Method

The Approximate Maximum Likelihood (AML) method of Szpak et al. [4] estimates the parameter vector t_p = [a, b, c, d, e, f]^T of the conic u(x)^T t_p = 0, where u(x) = [x^2, xy, y^2, x, y, 1]^T is the carrier vector of a data point x = (x, y)^T. The ellipticity constraint is represented in the same way as before,

t_p^T F t_p > 0    (4.4)

where F is given by

F = [0 0 2 0 0 0
     0 −1 0 0 0 0
     2 0 0 0 0 0
     0 0 0 0 0 0
     0 0 0 0 0 0
     0 0 0 0 0 0]    (4.5)

so that t_p^T F t_p = 4ac − b^2. The AML cost function is

Cost_AML(t_p) = Σ_{n=1}^{N} (t_p^T A_n t_p) / (t_p^T B_n t_p)    (4.6)

with

A_n = u(x_n) u(x_n)^T    (4.7)

B_n = ∂_x u(x_n) cov(x_n) ∂_x u(x_n)^T    (4.8)

where cov(x_n) provides the uncertainty in the edge data points. The authors use a merit function for optimising the cost function Cost_AML with respect to the ellipse constraint, given by

Merit = Cost_AML + α ||t_p||^4 / (t_p^T F t_p)^2.    (4.9)
4.1 Levenberg-Marquardt Algorithm

The LMA is used to optimise the merit function. Consider the residuals of t_p given by

r_n(t_p) = ( t_p^T A_n t_p / t_p^T B_n t_p )^{1/2}    (4.10)

for n = 1, …, N, and

r_{N+1}(t_p) = α^{1/2} ||t_p||^2 / (t_p^T F t_p).    (4.11)

Stacking these into the residual vector r(t_p) = [r_1(t_p), …, r_{N+1}(t_p)]^T,    (4.12)

the merit function becomes Merit = ||r(t_p)||^2.    (4.13)
The algorithm starts with an initial guess t_p^0 obtained from the Direct Least Squares Method, and the update rule is given by

t_p^{k+1} = t_p^k + δ_k    (4.14)

δ_k = − [ J(t_p^k)^T J(t_p^k) + λ_k I ]^{−1} J(t_p^k)^T r(t_p^k)    (4.15)

where J(t_p^k) = ∂r/∂t_p evaluated at t_p^k.
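A minimal sketch of this damped update (Python/NumPy; the toy residual below is an assumption for illustration and is not the AML residual):

```python
import numpy as np

def lm_step(r, J, lam):
    """One Levenberg-Marquardt update: delta = -(J'J + lam*I)^(-1) J'r."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam*np.eye(n), J.T @ r)

# Toy residual r(t) = [t0 - 1, 2*(t1 + 3)], minimized at t = (1, -3).
# Because r is affine in t, a single undamped step (lam = 0) lands exactly
# on the minimizer.
t = np.zeros(2)
r = np.array([t[0] - 1.0, 2.0*(t[1] + 3.0)])
J = np.array([[1.0, 0.0],
              [0.0, 2.0]])
t = t + lm_step(r, J, lam=0.0)
```

Larger damping λ_k shortens the step and biases it toward the gradient direction, which is what makes the iteration robust far from the optimum.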
The damping parameter λ_k controls the step size. In rare cases the LMA can overshoot and produce a t_p^{k+1} that is not feasible. In that scenario, the algorithm reverts to the previous feasible t_p^k and initiates another iterative procedure to obtain a better feasible estimate. The update rule then becomes

t_p^{k+1} = t_p^k + l δ_k    (4.16)

where the step length l is chosen, by a line search, such that t_p^{k+1} is feasible and the cost function is reduced. In this case δ_k is computed as before, but with the Jacobian projected onto the complement of t_p^k and the damped normal matrix inverted in the pseudo-inverse sense,

δ_k = − [ P_k J(t_p^k)^T J(t_p^k) P_k + λ I ]^{+} P_k J(t_p^k)^T r(t_p^k),
P_k = I − t_p^k (t_p^k)^T / ((t_p^k)^T t_p^k).    (4.17)-(4.18)
4.2 Results

The figures below are to be compared with the true ellipse and the edge image shown in the previous sections.
[Figures: Approximate Maximum Likelihood Fitted Ellipse & Edge Data Points (legend: AML Ellipse Fit, Data Points); results on the synthetic data and on the edge-image data; axis tick labels omitted]
5 Conclusion

The three methods used in this project were all able to fit the same ellipse to an ample dataset. When the edge dataset became partial, however, the first method failed to generate an ellipse, while the other two methods generated identical ellipses; on the partial synthetic data, all three methods produced an ellipse. The ellipses generated by DLS and AML are identical in the sense that each method confirms the result of the other. As noted in the previous section, extreme cases were not checked to see how the two methods differ. The performance of the methods could also be better evaluated under uncertainty by adding synthetic white Gaussian noise to the dataset and examining their behaviour on the noisy data. With regard to computation time, both the first and second methods are fast owing to their non-iterative structure, while the third method is a little slower due to the iterative process (LMA) involved. From the literature it is evident that AML produces better results than DLS, even though not on par with ellipses generated by minimizing the geometric distance. As far as the dataset in this project is concerned, both DLS and AML performed fairly well, while the least squares method fell short on partial datasets.
6 Appendix

6.1 Ellipse Plotting
The center of the ellipse a x^2 + 2b xy + c y^2 + 2d x + 2f y + g = 0 is

x0 = (c d − b f) / (b^2 − a c)    (6.1)

y0 = (a f − b d) / (b^2 − a c).    (6.2)

The semi-major and semi-minor axes are

a' = sqrt( 2(a f^2 + c d^2 + g b^2 − 2 b d f − a c g) / ((b^2 − a c)[ sqrt((a − c)^2 + 4 b^2) − (a + c)]) )    (6.3)

b' = sqrt( 2(a f^2 + c d^2 + g b^2 − 2 b d f − a c g) / ((b^2 − a c)[ −sqrt((a − c)^2 + 4 b^2) − (a + c)]) ).    (6.4)

The angle between the major axis and the X-axis is given by

φ = 0    for b = 0, a < c    (6.5)

φ = π/2    for b = 0, a > c    (6.6)

φ = (1/2) cot^{−1}((a − c)/(2b))    for b ≠ 0, a < c    (6.7)

φ = π/2 + (1/2) cot^{−1}((a − c)/(2b))    for b ≠ 0, a > c.    (6.8)
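These conversions are easy to exercise numerically. A small Python sketch (illustrative; it mirrors the MATLAB in Section 6.3 and folds the two b = 0 cases together):

```python
import numpy as np

def conic_to_geometric(a, b, c, d, f, g):
    """Center, semi-axes and tilt of a x^2 + 2b xy + c y^2 + 2d x + 2f y + g = 0."""
    delta = b*b - a*c
    x0 = (c*d - b*f) / delta                            # center, (6.1)
    y0 = (a*f - b*d) / delta                            # center, (6.2)
    num = 2.0*(a*f*f + c*d*d + g*b*b - 2.0*b*d*f - a*c*g)
    s = np.sqrt((a - c)**2 + 4.0*b*b)
    a_ax = np.sqrt(num / (delta*( s - (a + c))))        # semi-major, (6.3)
    b_ax = np.sqrt(num / (delta*(-s - (a + c))))        # semi-minor, (6.4)
    if b == 0:
        phi = 0.0 if a < c else np.pi/2.0               # (6.5)-(6.6)
    elif a < c:
        phi = 0.5*np.arctan(2.0*b/(a - c))              # 0.5*acot((a-c)/(2b))
    else:
        phi = np.pi/2.0 + 0.5*np.arctan(2.0*b/(a - c))  # (6.8)
    return (x0, y0), (a_ax, b_ax), phi

# x^2 + 4y^2 - 4 = 0 is the axis-aligned ellipse x^2/4 + y^2 = 1.
center, axes, phi = conic_to_geometric(1.0, 0.0, 4.0, 0.0, 0.0, -4.0)
print(center, axes, phi)  # center at the origin, semi-axes 2 and 1, tilt 0
```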
6.2 Least squares ellipse fitting based on minimization of algebraic distance
function book(uv)
% Least squares conic fit under the Bookstein constraint, solved via QR + SVD.
m = size(uv, 2);
x = uv(1, 1:end)';
y = uv(2, 1:end)';
S = [x, y, ones(m, 1), x.^2, sqrt(2) * x .* y, y.^2];
[Q, R] = qr(S);
R11 = R(1:3, 1:3);
R12 = R(1:3, 4:6);
R22 = R(4:6, 4:6);
[U, Sig, V] = svd(R22);
w = V(:, 3);
v = -R11 \ R12 * w;
A        = zeros(2);
A(1)     = w(1);
A([2 3]) = 1 / sqrt(2) * w(2);
A(4)     = w(3);
b = v(1:2);
c = v(3);
[Q, D] = eig(A);
Q = Q';
if prod(diag(D)) <= 0
    disp('Not an ellipse');
end
t = -0.5 * (A \ b);              % center of the ellipse
c_h = t' * A * t + b' * t + c;   % constant term in the canonical frame
z = t;
a = sqrt(-c_h / D(1,1));         % semi-axes
b = sqrt(-c_h / D(2,2));
alpha = atan2(Q(1,2), Q(1,1));   % orientation of the major axis
phi = alpha;
t = linspace(0, 2*pi, 1000);
xaxis = z(1) + a*cos(t)*cos(phi) - b*sin(t)*sin(phi);
yaxis = z(2) + a*cos(t)*sin(phi) + b*sin(t)*cos(phi);
plot(xaxis, yaxis, 'LineWidth', 2.5);
hold on
plot(uv(1,1:end)', uv(2,1:end)', 'ro')
title('Least Squares Ellipse Fitting-Bookstein Constraint');
legend('Linear LS Fit', 'Data Points')
end
6.3 Numerically stable Direct Least Squares Fitting of Ellipses
clc;
clear all;
I = imread('binary image.jpg');
subplot(1,2,1), imshow(I); title('Original Image')
I = rgb2gray(I);
% figure, imshow(I); title('Binary Image')
the_edge = edge(I, 'canny');
subplot(1,2,2), imshow(the_edge); title('Detected Edge');
[L, M] = find(the_edge);
x = L;
y = M;
XY = [L M];
A = fit_ellipse(L, M);
% Conic coefficients in the form a*x^2 + 2*b*x*y + c*y^2 + 2*d*x + 2*f*y + g = 0
a = A(1);
b = A(2)/2;
c = A(3);
d = A(4)/2;
f = A(5)/2;
g = A(6);
% Orientation of the major axis (cases of Section 6.1)
if b==0 && a<c
    phi = 0;
elseif b==0 && a>c
    phi = 0.5*pi;
elseif b~=0 && a<c
    phi = 0.5*acot((a-c)/(2*b));
elseif b~=0 && a>c
    phi = (pi/2) + 0.5*acot((a-c)/(2*b));
else
    disp('No conditions satisfied, exiting');
end
% Center and semi-axes (equations (6.1)-(6.4))
delta = b^2 - a*c;
x0 = (c*d - b*f)/delta;
y0 = (a*f - b*d)/delta;
nom = 2*(a*f^2 + c*d^2 + g*b^2 - 2*b*d*f - a*c*g);
s = sqrt((a-c)^2 + 4*b^2);
a_prime = sqrt(nom/(delta*( s - (a+c))));
b_prime = sqrt(nom/(delta*(-s - (a+c))));
t = linspace(0, 2*pi, 1000);
a_p = max(a_prime, b_prime);
b_p = min(a_prime, b_prime);
xaxis = x0 + a_p*cos(t)*cos(phi) - b_p*sin(t)*sin(phi);
yaxis = y0 + a_p*cos(t)*sin(phi) + b_p*sin(t)*cos(phi);
figure;
subplot(1,2,1), plot(xaxis, yaxis); title('Fitted Ellipse')
subplot(1,2,2), plot(M, L, 'ro'); title('Data Points')
6.4 Approximate Maximum Likelihood Method
r_ndel = (X_n * t_whi)/sqrt(num/den);
jacobian(j,:) = r_ndel;
end
r_n(end) = alpha*((norm(t_whi))^2/(t_whi'*F*t_whi));
Y = (eye(6)/(t_whi'*F*t_whi)) - (F*((t_whi'*t_whi)/(t_whi'*F*t_whi)^2));
r_enddel = 2*alpha*Y*t_whi;
jacobian_merit(1,:) = r_enddel;
jacobian_total = [jacobian; jacobian_merit];
hessian = jacobian_total'*jacobian_total;
cost(k) = r_n'*r_n;
if (~condition)
    [jacobian, jacobian_merit, r_n, I, lambda, delta, lagrange, F, ...
        t_p, cost, alpha, XY, len, k] = LMA(jacobian, jacobian_merit, r_n, I, ...
        lambda, delta, lagrange, F, ...
        t_p, cost, alpha, XY, len, k);
else
    [jacobian, jacobian_merit, r_n, I, lambda, delta, lagrange, F, ...
        t_p, cost, alpha, XY, len, k, tolDelta, gamma] = LSA(jacobian, ...
        jacobian_merit, r_n, I, lambda, delta, lagrange, F, ...
        t_p, cost, alpha, XY, len, k, tolDelta, gamma);
end
% To check if the latest update overshot the barrier term
if (t_p(:,k+1)' * F * t_p(:,k+1) <= 0)
    condition = true;
    lambda = 0;
    t_p(:,k+1) = t_p(:,k);
    if (k > 1)
        t_p(:,k) = t_p(:,k-1);
    end
% Check for various stopping criteria to end the main loop
elseif (min(norm(t_p(:,k+1)-t_p(:,k)), ...
        norm(t_p(:,k+1)+t_p(:,k))) < tolTheta && t_p_update)
    cond_continue = false;
elseif (abs(cost(k) - cost(k+1)) < tolCost && t_p_update)
    cond_continue = false;
elseif (norm(delta(:,k+1)) < tolDelta && t_p_update)
    cond_continue = false;
end
k = k + 1;
end

params = t_p(:,k);
params = params / norm(params);
A_n_l = u_x_l * u_x_l';
cost_1 = cost_1 + num_l/den_l;
cost_2 = cost_2 + num_2_l/den_2_l;
end
t_p_updated = true;
cost(k+1) = cost_1;
t_p(:,k+1) = t_new_1 / norm(t_new_1);
delta(:,k+1) = update_1(1:6)';
lambda = lambda_l;
end
frac = 0.5;
while (true)
    t_new_ls = t_p_ls + frac*update_ls;
    delta_ls = frac*update_ls;
    frac = frac / 2;

    cost = 0;
    for q = 1:len
        x_ls = XY_ls(1,q);
        y_ls = XY_ls(2,q);
        u_x_ls = [x_ls^2, x_ls*y_ls, y_ls^2, x_ls, y_ls, 1]';
        u_xdel_ls = [2*x_ls, y_ls, 0, 1, 0, 0; 0, x_ls, 2*y_ls, 0, 1, 0]';
        A_n_ls = u_x_ls * u_x_ls';
        cost = cost + num_2_ls/den_2_ls;
    end
References
[1] Walter Gander, Gene H. Golub, and Rolf Strebel. Least-squares fitting of circles and ellipses. BIT Numerical Mathematics, 34(4):558-578, 1994.

[2] Radim Halíř and Jan Flusser. Numerically stable direct least squares fitting of ellipses. In Proc. 6th International Conference in Central Europe on Computer Graphics and Visualization (WSCG), volume 98, pages 125-132. Citeseer, 1998.

[3] Andrew Fitzgibbon, Maurizio Pilu, and Robert B. Fisher. Direct least square fitting of ellipses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):476-480, 1999.

[4] Zygmunt L. Szpak, Wojciech Chojnacki, and Anton van den Hengel. Guaranteed ellipse fitting with the Sampson distance. In Computer Vision - ECCV 2012, pages 87-100. Springer, 2012.

[5] Zhengyou Zhang. Parameter estimation techniques: A tutorial with application to conic fitting. Image and Vision Computing, 15(1):59-76, 1997.

[6] Zygmunt L. Szpak et al. Guaranteed ellipse fitting using Sampson distance. https://sites.google.com/site/szpakz/source-code/guaranteed-ellipse-fitting-with-sampson-distance, December 2012.