A Novel Biologically Inspired Visual Saliency Model
  • Authors: Jingjing Zhao (1)
    Shujin Sun (2)
    Xingtong Liu (2)
    Jixiang Sun (2)
    Afeng Yang (2)
  • Keywords: Visual attention ; Saliency map ; Proto-object ; Support vector machines ; Biologically inspired model
  • Journal: Cognitive Computation
  • Publication date: December 2014
  • Year: 2014
  • Volume: 6
  • Issue: 4
  • Pages: 841-848
  • Full-text size: 1,450 KB
  • Author affiliations: Jingjing Zhao (1)
    Shujin Sun (2)
    Xingtong Liu (2)
    Jixiang Sun (2)
    Afeng Yang (2)

    1. College of Humanities and Social Sciences, National University of Defense Technology, Changsha, 410073, Hunan, People’s Republic of China
    2. College of Electronic Science and Engineering, National University of Defense Technology, Changsha, 410073, Hunan, People’s Republic of China
  • ISSN:1866-9964
Abstract
This paper focuses on modeling visual saliency. We present a novel model that simulates the two stages of visual processing involved in attention. First, proto-object features are extracted in the pre-attentive stage: on the one hand, salient pixels and regions are extracted; on the other hand, semantic proto-objects, which cover the possible contents of the observer's memory such as faces, people, cars, and text, are detected. Support vector machines are then used to simulate the learning process, establishing an association between the proto-object features and the salient information. A visual attention model is thus built through machine learning, and the saliency of a new image is obtained by inference. To validate the model, the eye-fixation prediction problem on the MIT dataset is studied. Experimental results indicate that the proposed model improves predictive accuracy compared with other approaches.
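The pipeline the abstract describes — per-location proto-object features fed to an SVM, whose output serves as the saliency map of a new image — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature channels, their dimensionality, and the synthetic labels are all assumptions for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training data: each row is a feature vector for one image location, e.g.
# [low-level saliency, face score, person score, car score, text score]
# (illustrative channels; the paper's exact features are not shown here).
X_train = rng.random((200, 5))
# Labels: 1 if observers fixated that location, 0 otherwise.  Here we
# fabricate labels correlated with one channel purely for illustration.
y_train = (X_train[:, 1] + 0.2 * rng.standard_normal(200) > 0.5).astype(int)

# An RBF-kernel SVM stands in for the learning stage that links
# proto-object features to salient information.
svm = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Inference on a new image: compute the same features at every location,
# then use the SVM decision values as the saliency map.
h, w = 6, 8
X_new = rng.random((h * w, 5))
saliency = svm.decision_function(X_new).reshape(h, w)

# Normalize to [0, 1] for display.
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
print(saliency.shape)
```

The decision value (signed distance from the separating hyperplane) is used rather than the hard class label so that the map is graded, which matches how saliency maps are compared against fixation density.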
