Next: Kalman Filtering Technique Up: Parameter Estimation Techniques: A Previous: Gradient Weighted Least-Squares Fitting

Bias-Corrected Renormalization Fitting


Consider the biquadratic representation of an ellipse:

    Q(x, y) = A x^2 + B x y + C y^2 + D x + E y + F = 0 .

Given $n$ noisy points $\mathbf{x}_i = [x_i, y_i]^T$ ($i = 1, \dots, n$), we want to estimate the coefficients of the ellipse: $\mathbf{p} = [A, B, C, D, E, F]^T$. Due to the homogeneity, we set $\|\mathbf{p}\| = 1$.

For each point $\mathbf{x}_i$, we thus have one scalar equation:

    f_i \equiv A x_i^2 + B x_i y_i + C y_i^2 + D x_i + E y_i + F = 0 .

Hence, $\mathbf{p}$ can be estimated by minimizing the following objective function (weighted least-squares optimization):

    \mathcal{F} = \sum_{i=1}^{n} w_i f_i^2 ,    (16)

where the $w_i$'s are positive weights.

Assume that each point has the same error distribution with mean zero and covariance matrix $\Lambda = \sigma^2 I_2$. The variance of $f_i$ is then given, to first order, by

    \sigma_{f_i}^2 = \left( \frac{\partial f_i}{\partial x_i} \right)^2 \sigma^2 + \left( \frac{\partial f_i}{\partial y_i} \right)^2 \sigma^2 .

Thus we have

    \sigma_{f_i}^2 = \sigma^2 \left[ (2 A x_i + B y_i + D)^2 + (B x_i + 2 C y_i + E)^2 \right] .

The weights can then be chosen to be inversely proportional to the variances. Since multiplication by a constant does not affect the result of the estimation, we set

    w_i = \frac{1}{(2 A x_i + B y_i + D)^2 + (B x_i + 2 C y_i + E)^2} .
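This weight computation can be sketched as follows (a minimal illustration assuming the parametrization $\mathbf{p} = [A, B, C, D, E, F]^T$ above; the function name `conic_weights` and the `eps` guard against a vanishing gradient are illustrative additions, not part of the method):

```python
import numpy as np

def conic_weights(p, points, eps=1e-12):
    """w_i = 1 / ((2*A*x + B*y + D)^2 + (B*x + 2*C*y + E)^2),
    i.e. the inverse of the variance of f_i up to the common factor sigma^2."""
    A, B, C, D, E, F = p
    w = np.empty(len(points))
    for i, (x, y) in enumerate(points):
        g2 = (2*A*x + B*y + D) ** 2 + (B*x + 2*C*y + E) ** 2
        w[i] = 1.0 / max(g2, eps)  # eps guards against a vanishing gradient
    return w
```

For example, for the unit circle $x^2 + y^2 - 1 = 0$ (i.e. $\mathbf{p} \propto [1,0,1,0,0,-1]^T$), the gradient norm squared is $4(x^2 + y^2) = 4$ at every point on the circle, so all weights equal 1/4.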
The objective function (16) can then be rewritten as

    \mathcal{F} = \sum_{i=1}^{n} w_i (\boldsymbol{\xi}_i^T \mathbf{p})^2 = \mathbf{p}^T M \mathbf{p} ,

which is a quadratic form in the unit vector $\mathbf{p}$. Let

    \boldsymbol{\xi}_i = [x_i^2, x_i y_i, y_i^2, x_i, y_i, 1]^T   and   M = \sum_{i=1}^{n} w_i \boldsymbol{\xi}_i \boldsymbol{\xi}_i^T .

The solution is the eigenvector of $M$ associated with the smallest eigenvalue.
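The eigenvector solution can be sketched as follows (a minimal example with fixed weights; the function names are illustrative; `numpy.linalg.eigh` returns eigenvalues in ascending order, so the first column of eigenvectors corresponds to the smallest eigenvalue):

```python
import numpy as np

def design_vector(x, y):
    """xi_i = [x^2, x*y, y^2, x, y, 1]^T for one point (x, y)."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def fit_conic_lls(points, weights=None):
    """Minimize sum_i w_i (xi_i^T p)^2 subject to ||p|| = 1: the solution
    is the unit eigenvector of M = sum_i w_i xi_i xi_i^T associated with
    the smallest eigenvalue."""
    points = np.asarray(points, dtype=float)
    if weights is None:
        weights = np.ones(len(points))   # plain (unweighted) least squares
    M = np.zeros((6, 6))
    for (x, y), w in zip(points, weights):
        xi = design_vector(x, y)
        M += w * np.outer(xi, xi)
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    return eigvecs[:, 0]                  # eigenvector of the smallest one
```

For noiseless points on a conic, the smallest eigenvalue is zero and the recovered $\mathbf{p}$ satisfies $\boldsymbol{\xi}_i^T \mathbf{p} = 0$ for every point (up to sign, since $-\mathbf{p}$ represents the same conic).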

If each point $\mathbf{x}_i$ is perturbed by noise $\Delta\mathbf{x}_i = [\Delta x_i, \Delta y_i]^T$ with

    E[\Delta\mathbf{x}_i] = \mathbf{0}   and   E[\Delta\mathbf{x}_i \Delta\mathbf{x}_i^T] = \sigma^2 I_2 ,

the matrix $M$ is perturbed accordingly: $M = \bar{M} + \Delta M$, where $\bar{M}$ is the unperturbed matrix. If $E[\Delta M] = 0$, then the estimate is statistically unbiased; otherwise, it is statistically biased, because, following the perturbation theorem, the bias of $\mathbf{p}$, i.e., $E[\Delta\mathbf{p}]$, is to first order proportional to $E[\Delta M]$.

Let $\bar{\boldsymbol{\xi}}_i$ be the unperturbed value of $\boldsymbol{\xi}_i$, then $\boldsymbol{\xi}_i = \bar{\boldsymbol{\xi}}_i + \Delta\boldsymbol{\xi}_i$. We have

    \Delta\boldsymbol{\xi}_i = [\, 2\bar{x}_i \Delta x_i + \Delta x_i^2 ,\; \bar{x}_i \Delta y_i + \bar{y}_i \Delta x_i + \Delta x_i \Delta y_i ,\; 2\bar{y}_i \Delta y_i + \Delta y_i^2 ,\; \Delta x_i ,\; \Delta y_i ,\; 0 \,]^T .
If we carry out the Taylor development and ignore quantities of order higher than 2, it can be shown that the expectation of $\Delta M$ is given by

    E[\Delta M] = \sigma^2 B ,   with   B = \sum_{i=1}^{n} w_i B_i ,

where, with $\mathbf{e} = [1, 0, 1, 0, 0, 0]^T$,

    B_i = \bar{\boldsymbol{\xi}}_i \mathbf{e}^T + \mathbf{e} \bar{\boldsymbol{\xi}}_i^T +
    \begin{pmatrix}
    4\bar{x}_i^2 & 2\bar{x}_i\bar{y}_i & 0 & 2\bar{x}_i & 0 & 0 \\
    2\bar{x}_i\bar{y}_i & \bar{x}_i^2+\bar{y}_i^2 & 2\bar{x}_i\bar{y}_i & \bar{y}_i & \bar{x}_i & 0 \\
    0 & 2\bar{x}_i\bar{y}_i & 4\bar{y}_i^2 & 0 & 2\bar{y}_i & 0 \\
    2\bar{x}_i & \bar{y}_i & 0 & 1 & 0 & 0 \\
    0 & \bar{x}_i & 2\bar{y}_i & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 0 & 0
    \end{pmatrix}

(in practice, $B_i$ is evaluated at the observed point $\mathbf{x}_i$). It is clear that if we define

    \hat{M} = M - c B   with   c = \sigma^2 ,

then $\hat{M}$ is unbiased, i.e., $E[\hat{M}] = \bar{M}$, and hence the unit eigenvector $\mathbf{p}$ of $\hat{M}$ associated with the smallest eigenvalue is an unbiased estimate of the exact solution $\bar{\mathbf{p}}$.
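The per-point bias matrix $B_i = \boldsymbol{\xi}_i \mathbf{e}^T + \mathbf{e} \boldsymbol{\xi}_i^T + D_i$ under the noise model $\Lambda = \sigma^2 I_2$ can be sketched as follows (a minimal illustration with assumed function names; the Monte-Carlo check below verifies the second-order moment computation, it is not part of the method):

```python
import numpy as np

def bias_mat(x, y):
    """Per-point bias matrix B_i = xi e^T + e xi^T + D_i, where
    xi = [x^2, xy, y^2, x, y, 1]^T, e = [1, 0, 1, 0, 0, 0]^T, and D_i
    collects the second-order moments of the perturbed xi under
    dx, dy ~ (0, sigma^2), divided by sigma^2."""
    xi = np.array([x * x, x * y, y * y, x, y, 1.0])
    e = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    D = np.array([
        [4*x*x, 2*x*y,     0.0,   2*x, 0.0, 0.0],
        [2*x*y, x*x + y*y, 2*x*y, y,   x,   0.0],
        [0.0,   2*x*y,     4*y*y, 0.0, 2*y, 0.0],
        [2*x,   y,         0.0,   1.0, 0.0, 0.0],
        [0.0,   x,         2*y,   0.0, 1.0, 0.0],
        [0.0,   0.0,       0.0,   0.0, 0.0, 0.0],
    ])
    return np.outer(xi, e) + np.outer(e, xi) + D
```

A quick Monte-Carlo experiment confirms that $E[\boldsymbol{\xi}_i \boldsymbol{\xi}_i^T] - \bar{\boldsymbol{\xi}}_i \bar{\boldsymbol{\xi}}_i^T \approx \sigma^2 B_i$ for small $\sigma$.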

Ideally, the constant $c$ should be chosen such that $E[\Delta\hat{M}] = 0$ (i.e., $c = \sigma^2$), but this is impossible unless the image noise characteristics are known. On the other hand, if $E[\Delta\hat{M}] = 0$, we have

    E[\bar{\mathbf{p}}^T \hat{M} \bar{\mathbf{p}}] = \bar{\mathbf{p}}^T \bar{M} \bar{\mathbf{p}} = 0 ,

because $\mathcal{F} = \mathbf{p}^T M \mathbf{p}$ takes its absolute minimum 0 for the exact solution $\bar{\mathbf{p}}$ in the absence of noise. This suggests that we require $\mathbf{p}^T \hat{M} \mathbf{p} = 0$ at each iteration. If, for the current $c$ and $\mathbf{p}$, $\mathbf{p}^T \hat{M} \mathbf{p} = \lambda_{\min} \neq 0$, we can update $c$ by $c + \Delta c$ such that

    \mathbf{p}^T (\hat{M} - \Delta c \, B) \mathbf{p} = 0 .

That is,

    \Delta c = \frac{\lambda_{\min}}{\mathbf{p}^T B \mathbf{p}} .
To summarize, the renormalization procedure can be described as follows:

  1. Let $c = 0$ and $w_i = 1$ for $i = 1, \dots, n$.

  2. Compute the unit eigenvector $\mathbf{p}$ of

         \hat{M} = M - c B

     associated with the smallest eigenvalue, which is denoted by $\lambda_{\min}$.

  3. Update $c$ as

         c \leftarrow c + \frac{\lambda_{\min}}{\mathbf{p}^T B \mathbf{p}} ,

     and recompute the $w_i$'s using the new $\mathbf{p}$.

  4. Return $\mathbf{p}$ if the update has converged; go back to step 2 otherwise.
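The four steps above can be sketched end-to-end as follows (a minimal illustration, not the author's implementation; the function names, the convergence test on successive estimates of $\mathbf{p}$, the `eps` guard in the weights, and the iteration cap are all assumptions made for the sketch):

```python
import numpy as np

def xi_vec(x, y):
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def bias_mat(x, y):
    # B_i = xi e^T + e xi^T + D_i (second-order moment matrix / sigma^2)
    xi, e = xi_vec(x, y), np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])
    D = np.array([
        [4*x*x, 2*x*y,     0.0,   2*x, 0.0, 0.0],
        [2*x*y, x*x + y*y, 2*x*y, y,   x,   0.0],
        [0.0,   2*x*y,     4*y*y, 0.0, 2*y, 0.0],
        [2*x,   y,         0.0,   1.0, 0.0, 0.0],
        [0.0,   x,         2*y,   0.0, 1.0, 0.0],
        [0.0,   0.0,       0.0,   0.0, 0.0, 0.0],
    ])
    return np.outer(xi, e) + np.outer(e, xi) + D

def weights_for(p, pts, eps=1e-12):
    # w_i = 1 / ((2A x + B y + D)^2 + (B x + 2C y + E)^2)
    A, B, C, D, E, F = p
    return np.array([1.0 / max((2*A*x + B*y + D)**2 + (B*x + 2*C*y + E)**2, eps)
                     for x, y in pts])

def renormalization_fit(pts, max_iter=50, tol=1e-8):
    pts = np.asarray(pts, dtype=float)
    n = len(pts)
    w = np.ones(n)        # step 1: c = 0, w_i = 1
    c = 0.0
    p_prev = None
    for _ in range(max_iter):
        M = sum(w[i] * np.outer(xi_vec(*pts[i]), xi_vec(*pts[i])) for i in range(n))
        Bm = sum(w[i] * bias_mat(*pts[i]) for i in range(n))
        lam, vecs = np.linalg.eigh(M - c * Bm)   # step 2: smallest eigenpair of M - cB
        p = vecs[:, 0]
        c += lam[0] / (p @ Bm @ p)               # step 3: c <- c + lambda_min / (p^T B p)
        if p_prev is not None and min(np.linalg.norm(p - p_prev),
                                      np.linalg.norm(p + p_prev)) < tol:
            break                                # step 4: converged (p is defined up to sign)
        p_prev = p
        w = weights_for(p, pts)                  # recompute the weights with the new p
    return p, c
```

On noiseless data the procedure should leave $c \approx 0$ and reproduce the exact conic; with noisy data, $c$ converges to an estimate of $\sigma^2$ and the bias correction takes effect.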

Remark 1: This implementation differs from the one described in Kanatani's paper [9], because in his implementation he uses N-vectors to represent the 2-D points. In the derivation of the bias, he assumes that the perturbation of each N-vector $\mathbf{m}_i$, i.e., $\Delta\mathbf{m}_i$ in his notation, has the same magnitude $\epsilon$. This is an unrealistic assumption. In fact, to first order, $\Delta\mathbf{m}_i = (\partial\mathbf{m}_i / \partial\mathbf{x}_i) \Delta\mathbf{x}_i$, so the magnitude of $\Delta\mathbf{m}_i$ depends on the position of the point in the image. Hence, $E[\|\Delta\mathbf{m}_i\|^2]$ differs from point to point, even if we assume that the perturbation in the image plane is the same for each point (with mean zero and standard deviation $\sigma$).

Remark 2: This method is optimal only in the sense of unbiasedness. Another criterion of optimality, namely the minimum variance of estimation, is not addressed by this method.


Zhengyou Zhang
Thu Feb 8 11:42:20 MET 1996