Overview
A very detailed reference on RANSAC.
Copyright 2008-2011 Marco Zuliani (draft). Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the appendix entitled "GNU Free Documentation License".

Contents

1 Introduction
2 Parameter Estimation in Presence of Outliers
  2.1 A Toy Example: Estimating 2D Lines
    2.1.1 Maximum Likelihood Estimation
  2.2 Outliers, Bias and Breakdown Point
    2.2.1 Outliers
    2.2.2 Bias
    2.2.3 Breakdown Point
  2.3 The Breakdown Point for a 2D Line Least Squares Estimator
3 RANdom Sample And Consensus
  3.1 Introduction
  3.2 Preliminaries
  3.3 RANSAC Overview
    3.3.1 How Many Iterations?
    3.3.2 Constructing the MSSs and Calculating q
    3.3.3 Ranking the Consensus Set
  3.4 Computational Complexity
    3.4.1 Hypothesize Step
    3.4.2 Test Step
    3.4.3 Overall Complexity
  3.5 Other RANSAC Flavors
4 RANSAC at Work
  4.1 The RANSAC Toolbox for Matlab & Octave
    4.1.1 RANSAC.m
  4.2 Some Examples Using the RANSAC Toolbox
    4.2.1 Estimating Lines
    4.2.2 Estimating Planes
    4.2.3 Estimating a Rotation Scaling and Translation
    4.2.4 Estimating Homographies
  4.3 Frequently Asked Questions
    4.3.1 What is the "right" value of σ?
    4.3.2 I want to estimate the parameters of my favourite model. What should I do?
    4.3.3 How do I use the toolbox for image registration purposes?
    4.3.4 Why is the behaviour of RANSAC not repeatable?
    4.3.5 What should I do if I find a bug in the toolbox?
    4.3.6 Are there any other RANSAC routines for Matlab?
A Notation
B Some Linear Algebra Facts
  B.1 The Singular Value Decomposition
  B.2 Relation Between the SVD Decomposition and the Eigen Decomposition
  B.3 Fast Diagonalization of Symmetric 2x2 Matrices
  B.4 Least Square Problems Solved via SVD
    B.4.1 Solving Aθ = b
    B.4.2 Solving Aθ = 0 subject to ‖θ‖ = 1
C The Normalized Direct Linear Transform (nDLT) Algorithm
  C.1 Introduction
  C.2 Point Normalization
  C.3 A Numerical Example
  C.4 Concluding Remarks about the Normalized DLT Algorithm
D Some Code from the RANSAC Toolbox
  D.1 Function Template
    D.1.1 MSS Validation
    D.1.2 Parameter Estimation
    D.1.3 Parameter Validation
    D.1.4 Fitting Error
  D.2 Source Code for the Examples
    D.2.1 Line Estimation
    D.2.2 Plane Estimation
    D.2.3 RST Estimation
    D.2.4 Homography Estimation
E GNU Free Documentation License
  1. Applicability and Definitions
  2. Verbatim Copying
  3. Copying in Quantity
  4. Modifications
  5. Combining Documents
  6. Collections of Documents
  7. Aggregation with Independent Works
  8. Translation
  9. Termination
  10. Future Revisions of This License
  Addendum: How to Use This License for Your Documents
References

1 Introduction

This tutorial and the toolbox for Matlab & Octave were mostly written during my spare time (with the loving disapproval of my wife), starting from some routines and some scattered notes that I reorganized and expanded after my Ph.D. years. Both the tutorial and the toolbox are supposed to provide a simple and quick way to start experimenting with the RANSAC algorithm utilizing Matlab & Octave.

The notes may seem somewhat heterogeneous, but they collect some theoretical discussions and practical considerations that are all connected to the topic of robust estimation, more specifically utilizing the RANSAC algorithm. Despite the fact that several users tested this package and sent me their invaluable feedback, it is possible (actually very probable) that these notes still contain typos or even plain mistakes.
Similarly, the RANSAC toolbox may contain all sorts of bugs. This is why I really look forward to receiving your comments: compatibly with my other commitments I will try to improve the quality of this little contribution, in the fervent hope that somebody might find it useful. I want to thank here all the persons that have been intensively using the toolbox and provided me with precious suggestions, in particular Dong Li, Tamar Back, Frederico Lopes, Jayanth Nayak, David Portabella Clotet, Chris Volpe, Zhe Zang, Ali Kalhili, George Polchin.

Los Gatos, CA, November
Marco Zuliani

2 Parameter Estimation in Presence of Outliers

This chapter introduces the problem of parameter estimation when the measurements are contaminated by outliers. To motivate the results that will be presented in the next chapters and to understand the power of RANSAC, we will study a simple problem: fitting a 2D line to a set of points on the plane. Despite its simplicity, this problem retains all the challenges that are encountered when the models used to explain the measurements are more complex.

2.1 A Toy Example: Estimating 2D Lines

Consider a set of N points D = {d₁, ..., d_N} ⊂ R² and suppose we want to estimate the best line that fits such points. For each point we wish to minimize a monotonically increasing function of the absolute value of the signed error

e_M(d; θ) = (θ₁x₁ + θ₂x₂ + θ₃) / √(θ₁² + θ₂²)    (2.1)

The sign of the error (2.1) accounts for the fact that the point lies either on the left or on the right semi-plane determined by the line. The parameter vector θ ∈ R³ describes the line according to the implicit representation θ₁x₁ + θ₂x₂ + θ₃ = 0 (this is the model M that we will use to fit the measurements). Note that the length of θ is immaterial.

Figure 2.1: Line fitting example.

Further details regarding the estimation of 2D lines can be found in Section 4.2.1.
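The toolbox itself is written in Matlab/Octave; purely as an illustration, the signed error (2.1) can be sketched in Python/NumPy as follows (the function and variable names here are my own, not those of the toolbox):

```python
import numpy as np

def signed_error(theta, points):
    """Signed orthogonal distance (2.1) of 2D points from the line
    theta[0]*x1 + theta[1]*x2 + theta[2] = 0. The sign tells on which
    semi-plane each point lies."""
    theta = np.asarray(theta, dtype=float)
    points = np.atleast_2d(np.asarray(points, dtype=float))
    return (points @ theta[:2] + theta[2]) / np.hypot(theta[0], theta[1])

# Line x1 - x2 = 0: the point (2, 2) lies on it, (0, 1) is 1/sqrt(2) away.
e = signed_error([1.0, -1.0, 0.0], [[2.0, 2.0], [0.0, 1.0]])
```

Multiplying θ by any nonzero constant leaves the errors unchanged, which is a concrete way to see why the length of θ is immaterial.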
This type of fitting is also known as orthogonal regression, since the distances of the sample points from the line are evaluated computing the orthogonal projection of the measurements on the line itself. Other types of regression can also be used, e.g. minimizing the distance of the projection of the measurements along the y axis (however such an approach produces an estimate of the parameters that is not invariant with respect to a rotation of the coordinate system).

2.1.1 Maximum Likelihood Estimation

Imagine that the fitting error is modeled as a Gaussian random variable with zero mean and standard deviation σ_η, i.e. e_M(d; θ) ~ N(0, σ_η). The maximum likelihood approach aims at finding the parameter vector that maximizes the likelihood of the joint error distribution, defined as:

L(θ) := p(e_M(d₁; θ), ..., e_M(d_N; θ))

In the previous expression, p indicates the joint probability distribution function (pdf) of the errors. Intuitively, we are trying to find the parameter vector that maximizes the probability of observing the signed errors e_M(dᵢ; θ). Therefore we would like to calculate the estimate

θ̂ = argmax_θ L(θ)

To simplify this maximization problem, we assume that the errors are independent (an assumption that should be made with some caution, especially in real life scenarios...) and we consider the log-likelihood L*(θ) := log L(θ). This trick allows us to simplify some calculations without affecting the final result, since the logarithm is a monotonically increasing function (and therefore the maximizer remains the same). Under the previous assumptions we can write:

L*(θ) = log ∏_{i=1}^N p(e_M(dᵢ; θ)) = Σ_{i=1}^N log p(e_M(dᵢ; θ)) = Σ_{i=1}^N ( log(1/Z_G) − ½ (e_M(dᵢ; θ)/σ_η)² )

where Z_G = √(2π) σ_η is the normalization constant for the Gaussian distribution. Therefore the maximum likelihood estimate of the parameter vector is given by

θ̂ = argmax_θ Σ_{i=1}^N ( log(1/Z_G) − ½ (e_M(dᵢ; θ)/σ_η)² ) = argmin_θ Σ_{i=1}^N ½ (e_M(dᵢ; θ)/σ_η)²    (2.2)

The function to be minimized is convex² with respect to θ, and therefore the minimizer can be determined utilizing traditional iterative descent methods [Ber99, Lue03] (see Figure 2.2(a) for a numeric example and Section 4.2.1 for further details regarding the calculations). Note that (2.2) is nothing but the familiar least squares estimator. This is a very well known result: for a more extensive treatment of this subject refer to [Men95].

We want to emphasize that the assumption of (independent) Gaussian errors implies that the probability of finding a point that supports the model with a residual larger than 3σ_η is less than 0.3%. We may be interested in finding the expression of the maximum likelihood estimate when the pdf of the error is not Gaussian. In particular we will focus our attention on the Cauchy-Lorentz distribution:

p(e_M(d; θ)) = (1/Z_C) · 1 / ( 1 + ½ (e_M(d; θ)/σ_η)² )

where Z_C = √2 π σ_η is the normalization factor. The notation used in the previous formula should not be misleading: for this distribution the mean and the variance (or the higher moments) are not defined. The value for the so called scale parameter has been chosen to be consistent with the expression obtained in the Gaussian case.

It is important to observe that the Cauchy-Lorentz distribution is characterized by heavier tails than the Gaussian distribution. Intuitively, this implies that the probability of finding a large error is higher if the distribution is Cauchy-Lorentz than if the distribution is Gaussian (see Figure 2.2(b)). If we derive the maximum likelihood estimate for this distribution we obtain

θ̂ = argmax_θ Σ_{i=1}^N ( log(1/Z_C) − log( 1 + ½ (e_M(dᵢ; θ)/σ_η)² ) ) = argmin_θ Σ_{i=1}^N log( 1 + ½ (e_M(dᵢ; θ)/σ_η)² )    (2.3)

Also in this case the function to be minimized is convex with respect to θ, and therefore the minimizer can be computed utilizing traditional iterative descent methods.

² A function f: R → R is convex if its graph between x₁ and x₂ lies below any segment that connects f(x₁) to f(x₂). Formally speaking, ∀λ ∈ [0,1] we have f(λx₁ + (1−λ)x₂) ≤ λf(x₁) + (1−λ)f(x₂). This notion generalizes straightforwardly to vector functions (whose graph essentially looks like a cereal bowl).
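As a sketch of both estimators (in Python/NumPy rather than the toolbox's Matlab/Octave, with hypothetical helper names), the least-squares estimate (2.2) can be computed in closed form via the SVD, while the Cauchy-Lorentz cost (2.3) is minimized below by a brute-force grid search over an (angle, offset) parametrization of the line, standing in for the iterative descent methods mentioned above:

```python
import numpy as np

def fit_line_ls(points):
    """Orthogonal least squares (2.2): the line normal is the right
    singular vector of the centered data with smallest singular value."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c, full_matrices=False)
    n = Vt[-1]                               # unit normal (theta1, theta2)
    return np.array([n[0], n[1], -n @ c])    # theta3 so the line passes through c

def fit_line_cauchy(points, sigma=0.1):
    """Cauchy-Lorentz ML estimate (2.3) via a grid search over lines
    x1*cos(phi) + x2*sin(phi) - rho = 0 (a crude stand-in for descent)."""
    P = np.asarray(points, dtype=float)
    rhos = np.linspace(-5.0, 5.0, 401)
    best_cost, best_theta = np.inf, None
    for phi in np.linspace(0.0, np.pi, 360, endpoint=False):
        n = np.array([np.cos(phi), np.sin(phi)])
        e = (P @ n)[:, None] - rhos[None, :]         # residuals for every offset
        costs = np.log1p(0.5 * (e / sigma) ** 2).sum(axis=0)
        j = np.argmin(costs)
        if costs[j] < best_cost:
            best_cost, best_theta = costs[j], np.array([n[0], n[1], -rhos[j]])
    return best_theta

# Four collinear points on x2 = x1 plus one gross outlier: the heavy-tailed
# Cauchy cost essentially ignores the outlier, while least squares does not.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [2.0, 10.0]])
theta_ls = fit_line_ls(pts)
theta_cauchy = fit_line_cauchy(pts)
```

On this toy data the Cauchy fit passes through the four collinear inliers while the least-squares line is dragged toward the outlier, anticipating the robustness issues studied in the following sections.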