Time-coherent 3D animation reconstruction from RGB-D video
  • Authors: Naveed Ahmed; Salam Khalifa
  • Keywords: 3D video; 3D animation; RGB-D video; Temporally coherent 3D animation
  • Journal: Signal, Image and Video Processing
  • Publication year: 2016
  • Publication date: April 2016
  • Volume: 10
  • Issue: 4
  • Pages: 783-790
  • Full-text size: 1,228 KB
  • Author affiliations: Naveed Ahmed (1)
    Salam Khalifa (1)

    1. Department of Computer Science, University of Sharjah, Sharjah, UAE
  • Journal category: Engineering
  • Journal subjects: Signal, Image and Speech Processing
    Image Processing and Computer Vision
    Computer Imaging, Vision, Pattern Recognition and Graphics
    Multimedia Information Systems
  • Publisher: Springer London
  • ISSN: 1863-1711
Abstract
We present a new method to reconstruct a time-coherent 3D animation from RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. These feature points are then used to match the 3D point clouds of consecutive frames, independently of their resolution. Our new motion vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using a novel error function, in addition to the standard techniques in the literature, and compare our method to existing methods. We show that despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to recover temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
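
For illustration only, the sketch below shows one plausible way to realize the first two steps described in the abstract: detect feature points in the color image, lift them to 3D using the depth map, and compute per-feature motion vectors between consecutive frames. This is not the authors' implementation; the ORB detector, the OpenCV/NumPy calls, the function names, and the camera intrinsics fx, fy, cx, cy are all assumptions, and the paper's unbiased color-plus-depth feature sampling and its dynamic alignment stage are not reproduced here.

    # Hypothetical sketch: match consecutive RGB-D frames via color features
    # back-projected to 3D. ORB + brute-force matching stand in for the paper's
    # unbiased color+depth feature point sampling.
    import cv2
    import numpy as np

    def backproject(u, v, z, fx, fy, cx, cy):
        """Lift pixel (u, v) with depth z (assumed in metres) to a 3D camera-space point."""
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def frame_motion_vectors(rgb1, depth1, rgb2, depth2, fx, fy, cx, cy, max_matches=200):
        """Return per-feature 3D start points and motion vectors between two consecutive RGB-D frames."""
        orb = cv2.ORB_create(nfeatures=1000)
        g1 = cv2.cvtColor(rgb1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(rgb2, cv2.COLOR_BGR2GRAY)
        kp1, des1 = orb.detectAndCompute(g1, None)
        kp2, des2 = orb.detectAndCompute(g2, None)
        if des1 is None or des2 is None:
            return np.empty((0, 3)), np.empty((0, 3))

        # Brute-force Hamming matching with cross-checking, keep the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

        starts, vectors = [], []
        for m in matches:
            (u1, v1) = kp1[m.queryIdx].pt
            (u2, v2) = kp2[m.trainIdx].pt
            z1 = depth1[int(v1), int(u1)]
            z2 = depth2[int(v2), int(u2)]
            if z1 <= 0 or z2 <= 0:      # skip invalid depth readings
                continue
            p1 = backproject(u1, v1, z1, fx, fy, cx, cy)
            p2 = backproject(u2, v2, z2, fx, fy, cx, cy)
            starts.append(p1)
            vectors.append(p2 - p1)     # per-feature motion vector between the two frames
        return np.asarray(starts), np.asarray(vectors)

In the paper's pipeline, motion vectors of this kind would then drive the motion vector-based dynamic alignment that carries a single surface representation coherently through the whole sequence.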
