Face-Based Gender Recognition
Abstract
The face is one of the most important biometric traits. A face image carries a great deal of information, such as gender, age, ethnicity, and identity. Gender recognition from faces aims to give computers the ability to determine a person's gender from an input face image.
     This thesis studies gender classification based on frontal face images. Generally speaking, a gender recognition system consists of three modules: image preprocessing, facial feature extraction, and classification. This thesis investigates all three modules and compares the recognition performance of several different schemes.
     To improve the recognition rate, this thesis proposes a method that extracts global features with the AdaBoost algorithm, extracts local features from the 83 landmarks located by an Active Appearance Model (AAM), and, after fusing the local and global features, classifies with a support vector machine (SVM). Extensive experiments were conducted on a database of more than 21,300 face images drawn from the AR, FERET, and CAS-PEAL-R1 databases, images collected from the Web, and images captured in our laboratory. The results show that fusing global and local features raises the recognition rate well above that of either feature set alone, to over 90%. Based on carefully designed comparative experiments, the thesis also gives practical recommendations on cropping the effective facial region during preprocessing and on choosing the AdaBoost structure.
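The fusion-then-classify scheme described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the feature vectors are synthetic stand-ins (no real AdaBoost Haar responses or AAM landmark parameters), and all dimensions, labels, and parameter values are assumptions chosen for the toy example.

```python
# Sketch of the described pipeline: concatenate global features
# (e.g. AdaBoost-selected responses) with local features (e.g. AAM
# landmark parameters), standardize, and train an SVM classifier.
# All data below is synthetic; dimensions are illustrative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # toy labels: 0 = female, 1 = male

# Synthetic stand-ins for the two feature sets (class-dependent means
# so the toy problem is separable).
global_feats = rng.normal(labels[:, None] * 2.0, 1.0, (n, 50))
local_feats = rng.normal(labels[:, None] * 2.0, 1.0, (n, 10))

# Fusion: simple concatenation, then per-feature standardization.
fused = np.hstack([global_feats, local_feats])
fused = StandardScaler().fit_transform(fused)

# RBF-kernel SVM, as commonly used for gender classification.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(fused, labels)
print(f"training accuracy: {clf.score(fused, labels):.2f}")
```

In practice the fused vectors would be split into training and test sets, and the SVM hyperparameters tuned by cross-validation; the point here is only the concatenation step that combines the two feature families before classification.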