Depth Map Enhancement with Interaction in 2D-to-3D Video Conversion
  • Journal: Lecture Notes in Computer Science
  • Year: 2017
  • Volume: 10092
  • Issue: 1
  • Pages: 183-193
  • References:
    1. Karsch, K., Liu, C., Kang, S.B.: Depth transfer: depth extraction from video using non-parametric sampling. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2144–2158 (2014)
    2. https://en.wikipedia.org/wiki/stereoscopy. Accessed 6 May 2016
    3. http://www.usnews.com/news/articles/2015/09/24/samsung-oculus-make-virtual-reality-affordable. Accessed 4 May 2016
    4. Zhang, L., Tam, W.J.: Stereoscopic image generation based on depth images for 3D TV. IEEE Trans. Broadcast. 51(2), 191–199 (2005)
    5. Liu, B., Gould, S., Koller, D.: Single image depth estimation from predicted semantic labels. In: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1253–1260. IEEE (2010)
    6. Konrad, J., Wang, M., Ishwar, P.: 2D-to-3D image conversion by learning depth from examples (2012)
    7. Saxena, A., Sun, M., Ng, A.Y.: Make3D: learning 3D scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31(5), 824–840 (2009)
    8. Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: Advances in Neural Information Processing Systems, pp. 2366–2374 (2014)
    9. Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42(3), 145–175 (2001)
    10. Liu, C.: Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. dissertation, Massachusetts Institute of Technology (2009)
    11. Liu, C., Yuen, J., Torralba, A., Sivic, J., Freeman, W.T.: SIFT flow: dense correspondence across different scenes. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5304, pp. 28–42. Springer, Heidelberg (2008). doi:10.1007/978-3-540-88690-7_3
    12. Karsch, K., Liu, C., Kang, S.B.: Depth extraction from video using non-parametric sampling (2012)
    13. Pietikainen, M., Heikkila, M.: A texture-based method for modeling the background and detecting moving objects. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 657–662 (2006)
    14. Deschamps, A., Howe, N.R.: Better foreground segmentation through graph cuts (2004)
    15. Behnke, S., Stuckler, J.: Efficient dense rigid-body motion segmentation and estimation in RGBD video. Int. J. Comput. Vis. 113(3), 233–245 (2015)
    16. http://research.microsoft.com/en-us/downloads/29d28301-1079-4435-9810-74709376bce1/. Accessed 20 May 2016
  • Authors and affiliations: Tao Yang (17)
    Xun Wang (17)
    Huiyan Wang (17)
    Xiaolan Li (17)

    17. School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou, 310018, China
  • Book series: Transactions on Edutainment XIII
  • ISBN: 978-3-662-54395-5
Abstract
The demand for 3D video content is growing. Conventional 3D video creation approaches require either dedicated capture devices or large amounts of labor-intensive manual depth labeling. To reduce the manpower and time consumption, many automatic approaches have been developed to convert legacy 2D videos into 3D. However, because of the strict quality requirements of the video production industry, most automatic conversion methods suffer from quality issues and cannot be used in actual production. As a result, manual and semi-automatic approaches remain the mainstream 3D video generation technologies. In our project, we take an automatic depth generation method [1] and introduce human-computer interaction into its processing pipeline, aiming to find a balance between time efficiency and depth map quality. The novelty of the paper lies in this successful attempt to improve an automatic 3D video generation method from the perspective of the video and film industry.
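The conversion itself follows the depth-image-based rendering (DIBR) paradigm of reference [4]: once a depth map exists for a frame, the two eye views are synthesized by shifting pixels horizontally in proportion to their depth. The sketch below illustrates that rendering step only; it is not code from the paper, and both the `max_disparity` parameter and the convention that larger depth values mean "closer to the camera" are assumptions made for illustration.

```python
import numpy as np

def dibr_stereo_pair(image, depth, max_disparity=24):
    """Warp one view into a left/right stereo pair using its depth map.

    Minimal DIBR sketch. Assumes `depth` encodes nearness (larger values
    = closer); `max_disparity` (in pixels) is an illustrative parameter.
    """
    h, w = depth.shape
    # Normalize depth to [0, 1] so disparity is resolution-independent.
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)
    disparity = np.round(d * max_disparity / 2).astype(np.int64)

    left = np.zeros_like(image)
    right = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # Nearer pixels are shifted further apart between the eye views.
        xl = np.clip(cols + disparity[y], 0, w - 1)
        xr = np.clip(cols - disparity[y], 0, w - 1)
        left[y, xl] = image[y, cols]
        right[y, xr] = image[y, cols]
    # A production pipeline would additionally fill disocclusion holes
    # (e.g. by inpainting) and resolve overlaps by depth ordering.
    return left, right
```

Applied per frame, `dibr_stereo_pair(frame, depth)` yields the two views a stereoscopic display interleaves; the quality of the result is dominated by the quality of `depth`, which is why the paper focuses on interactive depth map enhancement.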
