Abstract
We present a new method to reconstruct a time-coherent 3D animation from RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. These feature points are then used to match 3D point clouds in consecutive frames, independently of their resolution. Our new motion-vector-based dynamic alignment method then reconstructs a fully spatio-temporally coherent 3D animation. We perform extensive quantitative validation using a novel error function, in addition to standard techniques from the literature, and compare our method against existing approaches. We show that despite the temporal and spatial noise inherent to RGB-D data, it is possible to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.