Semi-Supervised Dimensionality Reduction and Ensemble Learning in Multi-Label Classification
Abstract
Multi-label classification and its applications are currently active research topics in machine learning and data mining; within this area, multi-label dimensionality reduction and multi-label ensemble classification are two directions well worth studying. Traditional machine learning concerns the single-label setting, in which each instance carries exactly one label, whereas this thesis addresses the multi-label setting, in which a sample may carry several labels simultaneously. The thesis investigates the basic methods of multi-label classification, semi-supervised learning, dimensionality reduction, and ensemble learning, together with their applications to a range of benchmark and practical datasets. From the two complementary perspectives of data preprocessing and classifier ensembles, it studies how semi-supervised learning can be used to reduce the dimensionality of high-dimensional multi-label data and how ensemble learning can improve the performance of multi-label classification.
     In real applications, high-dimensional multi-label data often come with only a few labeled samples and a large number of unlabeled ones. To eliminate redundant features while exploiting the latent information carried by the unlabeled samples, this thesis incorporates semi-supervised learning into multi-label dimensionality reduction and proposes a semi-supervised discriminant analysis based algorithm, MSDA (Multi-label Semi-supervised Discriminant Analysis). MSDA maximizes the separability between different classes using a graph weight matrix over sample attributes together with a similarity matrix over the partially observed labels, and simultaneously uses the unlabeled samples to estimate the intrinsic geometric structure of the original high-dimensional data on a low-dimensional manifold. Experiments on standard multi-label datasets show that the average performance of MSDA on several evaluation metrics is better than that of competing methods, confirming the algorithm's effectiveness.
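To make the construction concrete, the following is a minimal NumPy sketch of the kind of trace-ratio objective such a semi-supervised discriminant projection solves: a penalty graph built from label dissimilarity is traded off against a within-graph built from label similarity, plus a Laplacian smoothness term contributed by all samples, labeled and unlabeled. The cosine label similarity, the k-NN graph, the weight alpha, and the name msda_like_projection are all illustrative assumptions, not the thesis' exact MSDA formulation.

import numpy as np

def msda_like_projection(X, Y, n_labeled, dim, alpha=1.0, k=5):
    """Sketch of a semi-supervised multi-label discriminant projection.

    X : (n, d) feature matrix; the first n_labeled rows are labeled.
    Y : (n_labeled, q) binary 0/1 label matrix for those rows.
    Returns P : (d, dim); embed a sample x as x @ P.
    """
    n, d = X.shape
    Xl = X[:n_labeled]

    # Soft label similarity between labeled samples (cosine of label sets).
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)
    S = Yn @ Yn.T
    np.fill_diagonal(S, 0.0)

    # Penalty graph: pairs with dissimilar label sets should be pushed apart.
    Wb = 1.0 - S
    np.fill_diagonal(Wb, 0.0)
    Lb = np.diag(Wb.sum(axis=1)) - Wb
    # Within graph: pairs with similar label sets should stay close.
    Lw = np.diag(S.sum(axis=1)) - S

    # k-NN graph over ALL samples: the unlabeled data contribute a
    # manifold-smoothness regularizer through its graph Laplacian.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # skip self at position 0
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(axis=1)) - W

    # Maximize separation of dissimilar pairs against compactness of
    # similar pairs plus smoothness on the data manifold.
    A = Xl.T @ Lb @ Xl
    B = Xl.T @ Lw @ Xl + alpha * (X.T @ L @ X) + 1e-6 * np.eye(d)
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    top = np.argsort(-vals.real)[:dim]
    return vecs[:, top].real

A call such as P = msda_like_projection(X, Y, n_labeled, dim=10) would then be followed by training any multi-label classifier on the reduced representation X @ P.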
     To improve the often unsatisfactory classification performance on multi-label data, this thesis introduces ensemble learning into multi-label classification and proposes SPACME (Soft PAirwise Constraint projection for Multi-label Ensemble), a multi-label ensemble algorithm based on soft pairwise constraint projection. SPACME builds an initial base classifier by resampling the soft pairwise constraint information provided by the training samples, uses the resulting must-link and cannot-link sets to construct a constraint projection matrix that maps the original data into a new representation, then iteratively trains a set of base classifiers on the transformed data under a weight-update strategy so as to increase their diversity, and finally combines the base classifiers' outputs by majority voting to produce the predicted label set. Experiments show that exploiting soft pairwise constraint information clearly improves SPACME's classification accuracy and other performance measures on multi-label data, and that the algorithm is robust across varied settings.
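Purely to illustrate the overall pipeline (resample pairwise constraints from the label matrix, split them into must-link and cannot-link sets, project, train a base learner, vote), the sketch below makes several simplifying assumptions: Jaccard label similarity with a 0.5 threshold stands in for the soft constraint scores, scikit-learn's one-vs-rest logistic regression stands in for the base multi-label learner, and the thesis' iterative weight-update step is replaced by fresh constraint resampling each round. None of these choices should be read as the actual SPACME procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

def constraint_projection(X, must_link, cannot_link, dim):
    """Projection that spreads cannot-link pairs apart and compresses
    must-link pairs (a stand-in for SPACME's projection matrix)."""
    d = X.shape[1]
    C = np.zeros((d, d))
    for i, j in cannot_link:                 # push apart
        diff = (X[i] - X[j])[:, None]
        C += diff @ diff.T
    for i, j in must_link:                   # pull together
        diff = (X[i] - X[j])[:, None]
        C -= diff @ diff.T
    vals, vecs = np.linalg.eigh(C)
    return vecs[:, np.argsort(-vals)[:dim]]  # top eigenvectors of C

def spacme_like_ensemble(X, Y, n_rounds=10, n_pairs=200, dim=10, seed=0):
    """Train n_rounds base learners, each on data projected through a
    fresh resample of pairwise constraints; Y is a 0/1 integer matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    members = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=(n_pairs, 2))
        inter = (Y[idx[:, 0]] & Y[idx[:, 1]]).sum(axis=1)
        union = np.maximum((Y[idx[:, 0]] | Y[idx[:, 1]]).sum(axis=1), 1)
        sim = inter / union                  # soft Jaccard label similarity
        must_link = [tuple(p) for p, s in zip(idx, sim) if s >= 0.5]
        cannot_link = [tuple(p) for p, s in zip(idx, sim) if s < 0.5]
        P = constraint_projection(X, must_link, cannot_link, dim)
        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
        clf.fit(X @ P, Y)                    # binary-relevance base learner
        members.append((P, clf))
    return members

def spacme_like_predict(members, X_new):
    """Strict per-label majority vote over the ensemble members."""
    votes = sum(clf.predict(X_new @ P) for P, clf in members)
    return (2 * votes > len(members)).astype(int)

In this sketch, resampling the constraints in every round plays the same diversity-inducing role that the weight-update strategy plays in the thesis' description of SPACME.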
