Three-Dimensional Size Measurement Based on Digital Images
Abstract
Computer vision is non-contact and highly automated, which gives it broad application prospects in surface quality inspection, dimension measurement, and shape recognition of parts. Using digital images of objects and focusing on three-dimensional size measurement, this thesis carries out theoretical and experimental studies on edge detection, binocular calibration of CCD cameras, and measurement of the three-dimensional coordinates of points. Based on scalar diffraction theory, a Bessel-type point spread function with a correction parameter is proposed; convolving this point spread function with a step-edge model yields an edge gray-level distribution model containing the correction parameter. Based on the least-squares principle and this edge gray-level distribution model, a fitting algorithm for sub-pixel edge detection is developed, and the influence of factors such as fitting-window size and gray-level difference on the edge-detection resolution of the algorithm is analyzed both theoretically and experimentally. Building on an analysis of Tsai's calibration method, an improved calibration method is proposed; experiments show that it is simpler without reducing calibration accuracy, and the influence of image noise on the calibration results is also studied. Finally, after analyzing methods for measuring the three-dimensional coordinates of points, the least-squares method is used to measure, through experiments, the three-dimensional coordinates of points and the two- and three-dimensional sizes of objects.
     The work in this thesis is of some significance for developing digital-image measurement technology and for guiding its engineering application.
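A brief sketch, in equation form, of the edge gray-level distribution model referred to above: a step edge convolved with a normalized point spread function h. The specific Bessel-type PSF with a correction parameter developed in the thesis is not reproduced in this abstract, so h is left generic here.

```latex
% Step edge of height \Delta I at x_0, observed through a normalized PSF h
% (the thesis's Bessel-type PSF with a correction parameter is one choice
% for h; its exact form is not given in this abstract):
u(x) = I_0 + \Delta I \, H(x - x_0), \qquad
g(x) = (h * u)(x) = I_0 + \Delta I \int_{-\infty}^{x - x_0} h(t)\,\mathrm{d}t .
% Least-squares fitting of g to the sampled gray levels across the edge
% yields the sub-pixel edge position x_0.
```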
Computer vision research aims at recovering the structure of three-dimensional objects from two-dimensional images. Three-dimensional measurement is one of its central topics and one of its most widely applied research fields. The spatial point is the basic unit of three-dimensional structure: points make up lines, lines make up planes, and planes make up three-dimensional structures. In computer vision, the three-dimensional measurement of points is therefore the most fundamental task, underlying both pixel-level processing and the recovery of three-dimensional shapes. In many situations a scene contains many characteristic points; determining their positions determines the three-dimensional structure, and these characteristic points form the spatial structure of the image. This thesis therefore addresses the problem of three-dimensional measurement in computer vision, whose advantages include non-contact operation, high speed, high precision, and strong immunity to interference.
     The essence of recovering a spatial point is to obtain its three-dimensional coordinates from the optical model of the camera. Calibrating the model's intrinsic and extrinsic parameters means calibrating the transformations between the world and camera coordinate frames, between the camera and image-plane coordinate frames, between the image-plane and pixel coordinate frames, and hence between the world and pixel coordinate frames. On this basis, monocular calibration of the CCD camera is completed. Monocular vision alone, however, cannot realize three-dimensional measurement of points, because spatial depth must also be obtained; binocular calibration is therefore carried out on top of the monocular calibration. The work on three-dimensional measurement comprises a sub-pixel edge detection algorithm based on least-squares fitting, an analysis of sub-pixel edge detection based on the modified Bessel-type function, binocular calibration, and three-dimensional measurement of points applied to measuring part dimensions.
     A sub-pixel edge detection algorithm based on least-squares fitting. By localization accuracy, edge detection can be divided into pixel-level and sub-pixel edge detection. Pixel-level operators such as Sobel, Laplacian, and Canny are fast but cannot localize edges precisely: because the photosensitive cells of a CCD sensor have a fixed size, the image of an object edge does not generally fall exactly on a cell boundary, so part of the true edge information is lost during imaging. The aim of sub-pixel edge localization is to recover, exactly or approximately, the position of the true edge inside a pixel by computation. Hueckel first proposed a sub-pixel edge detection technique; existing sub-pixel operators fall into three classes, based on spatial moments, on least-squares fitting, and on interpolation. Fitting methods assume a model for the gray-level distribution of the edge and fit it by least squares to obtain the sub-pixel edge location; they are more accurate than the other two classes and are stable and robust against noise. Fitting can be done with polynomials or by least squares on a physical edge model; the latter is adopted here. Using the edge gray-level distribution model, a sub-pixel edge localization algorithm based on least-squares fitting of the modified Bessel-type model is proposed and implemented. Comparison of the two methods shows that the fitting residual of the proposed model is smaller than that of Gaussian fitting, and that the sub-pixel edge positions it yields are more accurate and less scattered. The method is suitable for sub-pixel localization of straight edges on planar parts.
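A minimal sketch of the least-squares fitting step along one scan line, assuming a Gaussian point spread function so that the blurred step edge becomes an error-function profile; the thesis's Bessel-type model with a correction parameter would replace `edge_model` below, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, i0, di, x0, sigma):
    """Gray levels of a step edge of height di at x0, blurred by a
    Gaussian PSF of width sigma (stand-in for the Bessel-type model)."""
    return i0 + 0.5 * di * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

def subpixel_edge(profile):
    """Fit the edge model to one gray-level scan line and return the
    sub-pixel edge position x0 (in pixel units along the line)."""
    x = np.arange(profile.size, dtype=float)
    span = float(profile.max() - profile.min())
    p0 = [float(profile.min()), span, profile.size / 2.0, 1.0]  # initial guess
    params, _ = curve_fit(edge_model, x, profile.astype(float), p0=p0)
    return params[2]  # fitted x0

# Example: a synthetic edge placed at x = 10.3 is recovered to sub-pixel accuracy.
x = np.arange(21, dtype=float)
profile = edge_model(x, 40.0, 120.0, 10.3, 1.2) + np.random.normal(0.0, 1.0, x.size)
print(subpixel_edge(profile))  # close to 10.3
```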
     Analysis of the sub-pixel edge detection algorithm based on the modified Bessel-type function. An arbitrary object plane can be regarded as the combination of countless small elements, and each element can be regarded as a δ-function. Once the response of the lens or imaging system to a point source (its light-amplitude distribution) is known, the optical field produced by an arbitrary object after imaging can be obtained by linear superposition, which gives the intensity distribution on the image plane. On this basis, the sub-pixel edge detection algorithm based on the modified Bessel-type function and the Gaussian fitting algorithm are compared through a series of designed experiments.
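A small numerical sketch of the superposition idea above: the intensity across an edge is modeled as the step-shaped object convolved with a normalized PSF. A discrete Gaussian kernel is used as a stand-in, since the modified Bessel-type PSF itself is not specified in this abstract.

```python
import numpy as np

def psf_kernel(sigma=1.2, radius=5):
    """Discrete, normalized 1-D PSF; a Gaussian stand-in for the
    Bessel-type point spread function with a correction parameter."""
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

# Ideal step edge (object) and its image: a superposition of PSF responses.
step = np.where(np.arange(40) < 20, 40.0, 160.0)
image_line = np.convolve(step, psf_kernel(), mode="same")
# Away from the array boundaries, image_line shows the smooth gray-level
# transition that the fitting algorithm localizes to sub-pixel accuracy.
```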
     Binocular calibration of the CCD cameras. A computer vision system starts by acquiring image information with a CCD camera in order to compute the position and geometric shape of a three-dimensional object and then to recognize it. The brightness of each point in the image plane reflects the intensity of the light reflected from a point on the spatial object, and the position of each image point is related to the position of the corresponding point on the object surface; these relationships are determined by the geometric model of CCD imaging. Camera calibration, the estimation of the parameters of this imaging model by measurement and computation, is an essential part of non-contact measurement. Its goal is to establish the correspondence between the camera's image coordinate frame and the three-dimensional world coordinate frame, so that the real position of a spatial point can be inferred from its two-dimensional image coordinates; this requires recovering the intrinsic and extrinsic parameters of the camera. With these parameters, a stereo imaging model can be built and the three-dimensional information of the observed object can be recovered, realizing three-dimensional measurement. In binocular stereo vision, the relative position and orientation between the two cameras must be determined. In some situations the intrinsic parameters and the relative pose need not be solved explicitly, and it suffices to establish a mapping between the two-dimensional coordinates of the projection and the three-dimensional coordinates of the observed point. Because of manufacturing and assembly errors in the camera's optical system, the real image projected onto the image plane differs from the ideal image by optical distortion; lens distortion degrades calibration accuracy and hence measurement accuracy, so it must be taken into account during calibration and measurement. For binocular calibration of the CCD cameras, an improved calibration method based on Tsai's method is proposed. Experimental results show that the proposed method is simpler without reducing calibration accuracy.
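For orientation only, a sketch of a binocular calibration workflow (intrinsics, lens distortion, then the relative pose between the two cameras) using OpenCV. Note that OpenCV's routines implement Zhang's planar calibration, not the improved Tsai method developed in the thesis; the function and variable names are illustrative, and the point correspondences are assumed to come from a calibration target such as a checkerboard.

```python
import cv2
import numpy as np

# obj_pts: list of (N, 3) float32 arrays of known target points (world frame);
# img_pts_l / img_pts_r: matching lists of (N, 1, 2) float32 detected corners
# in the left/right images; image_size = (width, height). All assumed given.

def stereo_calibrate(obj_pts, img_pts_l, img_pts_r, image_size):
    # Per-camera intrinsics and lens distortion (Zhang's method in OpenCV,
    # standing in for the thesis's improved Tsai calibration).
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)
    # Relative rotation R and translation T between the two cameras.
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```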
     Measurement of the three-dimensional coordinates of points and of object dimensions. Obtaining the three-dimensional coordinates of a point is an important problem in computer vision and a critical step in simulating the function of human eyes with a computer. Only through three-dimensional spatial measurement can the stereo information of an object be recovered from two-dimensional image coordinates, in which depth has been lost. In stereo vision, three-dimensional measurement is in fact the inverse process of camera calibration: first the intrinsic and extrinsic parameters of the CCD cameras are calibrated, then lens distortion is corrected, and finally the calibration parameters are substituted into the three-dimensional measurement model to reconstruct the three-dimensional point. This measurement of points is non-contact and highly accurate, and it is applied to measuring part dimensions.
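A minimal sketch of the final reconstruction step, assuming the calibration above has produced a 3x4 projection matrix for each camera: the point's three-dimensional coordinates are recovered by a linear least-squares (DLT-style) triangulation from its two image projections. Names and inputs are illustrative.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two calibrated views.
    P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3-D coordinates

# Distances between reconstructed points then give the part's 2-D/3-D sizes,
# e.g. np.linalg.norm(triangulate(P1, P2, a1, a2) - triangulate(P1, P2, b1, b2)).
```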
