Research on Monocular Vision Position and Attitude Measurement Technology Based on Target Features
Abstract
The monocular vision position and attitude (pose) measurement system is an important class of vision measurement system. Compared with binocular vision, it has a simpler structure and high measurement accuracy, and it is widely used in rendezvous and docking, robotics, unmanned aerial vehicles, and automatic inspection equipment. To solve the problems that vision-based pose measurement faces in space rendezvous and docking applications, this thesis studies a monocular vision pose measurement system based on target features, covering both pose measurement with a cooperative target and pose measurement with a non-cooperative target.
Rendezvous and docking missions at home and abroad usually adopt a pose measurement system based on a cooperative target. Such a system consists mainly of a camera, the cooperative target, feature extraction and pose resolving algorithms, a laser illumination system, and a control system; the feature extraction and pose resolving algorithms are its core. This thesis analyzes the principle of feature extraction for cooperative-target pose measurement and verifies its feasibility experimentally. A precision model of cooperative-target pose measurement is established, and the six main factors affecting the accuracy of three-point pose measurement are analyzed: measurement distance, the geometric arrangement of the feature points, the extraction accuracy of the feature points in the image, the quantization error of the camera, the calibration error of the camera's intrinsic parameters, and the pose resolving algorithm. The way and the degree to which each factor affects the overall accuracy are analyzed. The accuracy of the measurement system is verified by reprojection: at 2.4 m the reprojection pixel error is (0.05, 0.05), corresponding to a position error of about 0.12 mm.
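The reprojection check described above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the thesis's actual implementation: the intrinsic matrix `K`, the feature-point coordinates, and the identity pose below are all hypothetical values chosen for the example.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project 3D target points into the image with a pinhole model."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # target frame -> camera frame
    uv = (K @ cam.T).T                             # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]                  # perspective division

def reprojection_error(points_3d, observed_uv, R, t, K):
    """Per-axis mean pixel error between observed and reprojected points."""
    return np.abs(project(points_3d, R, t, K) - observed_uv).mean(axis=0)

# Hypothetical setup: three feature points at 2.4 m, camera at identity pose
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.4],
                [0.1, 0.0, 2.4],
                [0.0, 0.1, 2.4]])
uv = project(pts, np.eye(3), np.zeros(3), K)
err = reprojection_error(pts, uv, np.eye(3), np.zeros(3), K)
```

With the true pose, the error is zero by construction; in practice the error vector (e.g. the (0.05, 0.05) pixels reported above) comes from comparing points reprojected with the *estimated* pose against the *measured* image coordinates.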
A cooperative target improves the stability and accuracy of the measurement system but restricts its range of application, for example when performing on-orbit maintenance or upgrades on a spacecraft that carries no pre-installed cooperative target. To extend the monocular vision pose measurement system to such cases, this thesis develops a pose measurement system based on a non-cooperative target. The main development difficulties lie in feature extraction and pose resolving. Without a laser illumination system to assist feature-point extraction, extraction becomes harder and less accurate. Based on the characteristics of the target in use, this thesis proposes a method that extracts target features from global image information; feature fitting and related methods recover the target's implicit information and establish the correspondence between features in the image and features on the target. Because feature points on a non-cooperative target are not arranged to suit the measurement task, and the extractable feature types vary, no single pose resolving method applies; an appropriate algorithm must be designed for each situation. For the features extractable on the target used in this work, an algorithm is proposed that computes the relative pose from the target's circle center, radius, and related information; its error sources are then analyzed and a correction method is proposed. On the basis of the theoretical analysis, experiments verify the correctness and stability of the algorithm, and its accuracy is checked by reprojection: at 2.8 m the reprojection pixel error is (0.4795, 0.5606), corresponding to a position error of 1.48 mm.
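Recovering a circle center and radius from extracted edge points is commonly done with an algebraic least-squares fit. The Kåsa-style fit below is a generic textbook technique shown for illustration, not necessarily the exact fitting procedure used in the thesis; the sample points are synthetic.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    then converts to center (a, b) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, rhs, rcond=None)[0]
    a, b = -D / 2.0, -E / 2.0
    r = np.sqrt(a**2 + b**2 - F)
    return a, b, r

# Synthetic, noise-free points on a circle of radius 2 centred at (1, -1)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
a, b, r = fit_circle(1 + 2 * np.cos(theta), -1 + 2 * np.sin(theta))
```

The fitted center and radius in the image, combined with the target circle's known physical radius, are the inputs from which a relative pose can then be resolved.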
Precision analysis and error allocation are important parts of measurement system development; precision analysis guides system design, error analysis, and error allocation. A higher-precision instrument is used to analyze the error of the vision measurement system: in the experiments a total station calibrates the camera parameters and evaluates system accuracy. The experiments show that cooperative-target pose measurement achieves a position accuracy of 2.6‰ at a measurement distance of about 2.4 m, with attitude-angle accuracy better than 0.2° about all three axes; non-cooperative-target pose measurement achieves a position accuracy of 6.0‰ at a relative distance of about 2.8 m, with attitude-angle accuracy better than 0.3°.
This thesis completes the framework theory and experimental verification of monocular vision measurement, using a total station for camera calibration and measurement accuracy analysis. An error model of cooperative-target position measurement is established that can guide system design and error allocation; methods are proposed for extracting non-cooperative-target features from images and for resolving pose from those features, their errors are analyzed, the scheme is refined, and experiments verify their correctness and stability; a precision analysis scheme for the measurement system is proposed, the camera's intrinsic and extrinsic parameters are calibrated, and the accuracy of both the cooperative-target and non-cooperative-target measurement systems is analyzed.
The monocular vision measuring system is an essential part of the vision measurement field. It is widely adopted in RVD (Rendezvous and Docking) and automatic inspection equipment for its simplicity and high precision. In order to extend the application of monocular vision systems to robotic arms, robotic hands, and RVD, this research covers measurement based on both cooperative and non-cooperative targets.
The measuring system based on a cooperative target consists of a camera, the cooperative target, a feature extraction algorithm, a position and orientation (pose) resolving algorithm, a laser illumination system, and a control system, among which the feature extraction and pose resolving algorithms are the most important parts. During RVD operation, feature points are extracted by subtracting the background image (taken without the target illuminated) from the image containing the target. A pose resolving algorithm based on three feature points is typically adopted for cooperative-target measurement. The precision of the three-point algorithm is influenced mainly by six aspects, including the measurement distance, the geometry of the feature points, and the extraction precision of the feature points.
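The subtract-and-extract step above can be sketched as follows. This is a toy single-spot example: the frame contents and the threshold value are hypothetical, and a real system would segment multiple blobs and use sub-pixel refinement.

```python
import numpy as np

def extract_spot_centroid(lit, dark, threshold):
    """Subtract the unlit (background) frame from the laser-lit frame,
    then take the intensity-weighted centroid of the bright residue."""
    diff = lit.astype(float) - dark.astype(float)
    mask = diff > threshold
    ys, xs = np.nonzero(mask)
    w = diff[mask]
    cx = (xs * w).sum() / w.sum()   # intensity-weighted column (x)
    cy = (ys * w).sum() / w.sum()   # intensity-weighted row (y)
    return np.array([cx, cy])

# Hypothetical 5x5 frames with one bright spot at pixel (x=2, y=3)
dark = np.zeros((5, 5))
lit = dark.copy()
lit[3, 2] = 100.0   # row 3, column 2
c = extract_spot_centroid(lit, dark, threshold=10.0)
```

Intensity weighting is what gives sub-pixel centroid localization when a real spot spans several pixels.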
Many working conditions offer no cooperative target, for example on-orbit maintenance of the Hubble Space Telescope, size measurement of workpieces, and license plate detection. To extend pose measurement to those conditions, a pose measurement system based on a non-cooperative target is developed. This system consists of a camera, the non-cooperative target, a feature extraction algorithm, and a pose resolving algorithm. Dimensional information about the target cannot be reconstructed by a monocular system, so the target's dimensions must be known in advance. While the precision of feature extraction is the most influential factor, extracting feature points on a non-cooperative target is considerably harder and less precise than on a cooperative one. Pose resolution is another difficult problem because the feature type is not consistent: feature points, lines, curves, and regions are all used for pose calculation, so the pose resolving algorithms must be able to recover pose from any of those features. In practice the pose resolving algorithm varies with the conditions.
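The need for known target dimensions follows directly from the pinhole geometry: a single camera gives scale only through a feature of known physical size. A minimal sketch, with hypothetical numbers (the focal length, ring diameter, and image size below are illustrative, not thesis data):

```python
# Monocular depth from a feature of known physical size (pinhole model):
#   Z = f * D / d
# f: focal length in pixels, D: known physical dimension (m),
# d: the same dimension measured in the image (pixels).
def depth_from_known_size(f_px, D_m, d_px):
    return f_px * D_m / d_px

# Hypothetical: 800 px focal length, 0.3 m ring diameter imaged as 100 px
z = depth_from_known_size(800.0, 0.3, 100.0)
```

Without the known value `D_m`, the same image is consistent with any scaled copy of the target, which is why monocular pose measurement requires a target model.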
Precision analysis and accuracy allocation are two important steps after the measuring system is developed. Camera calibration, a prerequisite for precision analysis and accuracy allocation, consists of intrinsic and extrinsic parameter calibration. The intrinsic parameters comprise the focal length, the image center, and the pixel size on the CCD or CMOS sensor; the extrinsic parameters describe the position and orientation of the camera coordinate frame relative to the reference frame. A total station is adopted for camera calibration and precision analysis, with three corner cubes mounted on both the target and the camera frame as targets for the total station. The position accuracy for the cooperative target is 2.6‰ at a measurement distance of about 2.4 meters, with angle accuracy better than 0.2°. The position accuracy for the non-cooperative target is 6.0‰ at a measurement distance of about 2.8 meters, with angle accuracy better than 0.3°.
The camera model and the camera calibration model are included as preliminaries for pose measurement. Pose calculation algorithms using three, four, and N feature points are derived; the feature-point extraction mechanism for the cooperative target is presented, and the image processing procedure is analyzed. For the non-cooperative target, feature extraction methods for points, lines, and curves are presented, together with methods for finding correspondences between features extracted from images and features on the model, and pose resolving methods for varying feature types are developed. A total station is adopted for camera calibration and accuracy analysis. The framework theories of monocular vision measurement are derived, experiments are carried out, and a precision analysis is presented.
