Research on Granular Neural Networks Based on Rough Sets
Abstract
As one of the three main models of granular computing, rough set theory can analyze and reason on data directly to discover hidden knowledge and reveal underlying regularities, and is therefore a natural data-mining method. Neural networks, another classic data-mining method, are intelligent computing models that perform distributed, parallel information processing by imitating the behavioral characteristics of biological neural networks. Since rough sets and neural networks are highly complementary in information processing, knowledge acquisition, noise suppression, and generalization ability, granular neural network models that integrate the two have become a new and important branch of intelligent integrated systems and one of the hot research topics in this field.
     This dissertation studies rough-set-based granular neural network models from two directions. One uses rough sets as a front-end system: attribute reduction performs granular reduction of the data set to simplify the neural network structure and improve training speed and prediction accuracy. The other uses rough sets and their extended models to extract decision rules, and defines the granular neurons and their connection weights from the extracted rules, achieving a seamless fusion of rough sets and neural networks. In addition, an extreme learning algorithm is studied for each granular neural network model; through a mathematical transformation, this algorithm completes the learning process in a single pass. The main contributions are as follows:
     1. While keeping the classification ability unchanged, the training sample set is granularly reduced by the attribute reduction algorithm of rough set theory. Based on the reduced training set, the structure of the granular BP neural network is optimized, accelerating training and improving generalization. To address the slow training, local minima, and overfitting of the traditional BP algorithm, a quantum-behaved particle swarm algorithm with global search ability is proposed to adaptively determine the number of hidden neurons, the connection weights, and the thresholds of the granular BP network.
     2. Rough sets and the AP clustering algorithm are used to optimize the structure of the granular RBF neural network, yielding a new granular RBF model. In this model, the AP clustering algorithm, which requires no prior knowledge, clusters the reduced data set; the resulting centers and widths are passed to the RBF units in the hidden layer of the granular RBF network. For each reduced sample, the outputs of the hidden RBF units are computed, and the network is trained by the traditional RBF algorithm.
     3. An improved extreme learning algorithm is proposed to optimize single-hidden-layer granular neural network models. The algorithm uses AP clustering to adaptively determine the number of hidden nodes, and constructs new activation functions (Gaussian functions) from the cluster centers and widths. Applying this algorithm to granular BP and granular RBF networks realizes adaptive extreme learning for both within a unified framework, establishing a single-hidden-layer granular neural network model with adaptive extreme learning ability.
     4. From the decision rules extracted after attribute reduction and value reduction, a new granular neural network model, the rough-rule granular neural network, is established. In this model, a rule-matching layer replaces the hidden layer of a traditional neural network; each granular neuron in this layer represents one decision rule, and the input and output connection weights are initialized from the antecedent and consequent of the rule. Finally, the extreme learning algorithm further adjusts the output weights to improve the classification ability of the network.
     5. Considering that decision rules should also have some fault tolerance, granular double-neuron networks and their learning algorithm are proposed based on variable precision rough set theory. In this model, the middle-layer neurons are all granular double neurons, representing the upper and lower approximations of each decision rule. Finally, the extreme learning algorithm adjusts the connection weights to improve classification ability. Moreover, to improve the ability of granular double neurons to handle large data sets, AP clustering is introduced to granulate the data, yielding an AP-clustering-based optimization method for granular double-neuron networks.
     The main work of the dissertation is to propose several rough-set-based granular neural network models and their learning algorithms, and to verify the effectiveness of the network structures and learning algorithms by experiments.
Rough set theory, as one of the three main models of granular computing, can analyze and reason on data directly to discover hidden knowledge and reveal underlying regularities; it is therefore a natural data-mining method. Neural networks, another classic data-mining method, are intelligent computing models that perform distributed, parallel information processing by imitating the behavioral characteristics of biological neural networks. Rough sets and neural networks complement each other in information processing, knowledge acquisition, noise-suppression capability, and generalization ability. Granular neural networks, which integrate the advantages of both, have become a new and important branch of intelligent integrated systems and one of the hot topics in intelligent information processing.
     This dissertation studies two modes of integrating rough sets with neural networks. In the first, rough sets serve as a front-end processor: attribute reduction compresses the dimensionality of the information space, simplifying the structure of the neural network and improving its training speed and prediction accuracy. In the second, rough sets are used to extract decision rules that define the granular neurons and determine the network structure and connection weights, achieving a seamless integration of rough set theory and neural networks. In addition, the dissertation studies an extreme learning algorithm for each integrated mode, which completes the learning process in a single pass through a mathematical transformation. The main work includes the following aspects:
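The "extreme learning" idea mentioned above, i.e., completing training in one pass via a mathematical transformation, corresponds to the extreme learning machine: the hidden-layer weights are fixed at random and only the output weights are solved in closed form with a Moore-Penrose pseudoinverse. A minimal NumPy sketch (the network size and the toy data are illustrative assumptions, not the dissertation's experiments):

```python
import numpy as np

def elm_train(X, T, n_hidden=20, rng=None):
    """One-pass ELM training: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                  # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy regression: learn y = sin(x) on [0, pi] in a single pass
X = np.linspace(0, np.pi, 40).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=15, rng=0)
err = np.abs(elm_predict(X, W, b, beta) - T).max()
```

No iterative gradient descent occurs: the only "learning" is the pseudoinverse solve, which is why training finishes in one step.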
     1. While keeping the classification ability unchanged, the training data set is simplified by the attribute reduction algorithm of rough set theory. The reduced training set is then used to optimize the structure of the granular BP neural network, accelerate its training, and improve its generalization ability. To address the inherent defects of the traditional BP algorithm, such as slow training, local minima, and overfitting, the dissertation proposes a method based on the quantum-behaved particle swarm optimization algorithm, which has global search ability, to adaptively determine the weights and thresholds of the granular BP network.
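The attribute-reduction step can be illustrated with the standard rough-set notion of the positive region: an attribute is dispensable if removing it leaves the positive region of the decision unchanged. A toy greedy sketch (the decision table below is a made-up example, not from the dissertation):

```python
from collections import defaultdict

def positive_region(rows, attrs, decision):
    """Objects whose equivalence class (w.r.t. attrs) has a unique decision value."""
    classes = defaultdict(list)
    for i, r in enumerate(rows):
        classes[tuple(r[a] for a in attrs)].append(i)
    pos = set()
    for members in classes.values():
        if len({rows[i][decision] for i in members}) == 1:
            pos |= set(members)
    return pos

def reduct(rows, attrs, decision):
    """Greedy reduct: drop each attribute whose removal keeps the positive region."""
    full_pos = positive_region(rows, attrs, decision)
    kept = list(attrs)
    for a in list(attrs):
        trial = [x for x in kept if x != a]
        if trial and positive_region(rows, trial, decision) == full_pos:
            kept = trial
    return kept

# toy decision table; the greedy pass should find that 'windy' is dispensable
table = [
    {"outlook": "sunny", "windy": 0, "humid": "high", "play": "no"},
    {"outlook": "sunny", "windy": 1, "humid": "high", "play": "no"},
    {"outlook": "rain",  "windy": 0, "humid": "high", "play": "yes"},
    {"outlook": "rain",  "windy": 1, "humid": "norm", "play": "no"},
    {"outlook": "cloud", "windy": 0, "humid": "norm", "play": "yes"},
    {"outlook": "cloud", "windy": 1, "humid": "high", "play": "yes"},
]
red = reduct(table, ["outlook", "windy", "humid"], "play")
```

The reduced attribute set then defines a smaller input layer for the granular BP network, which is the source of the speed and generalization gains described above.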
     2. A new granular RBF neural network model based on rough sets and the affinity propagation (AP) clustering algorithm is presented. In this model, the AP clustering algorithm, which requires no prior knowledge, clusters the reduced data set; the resulting centers and widths are passed to the RBF units in the hidden layer of the granular RBF network. The outputs of the hidden RBF units are then computed, and the network is trained by the traditional RBF learning algorithm.
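AP clustering selects exemplars by message passing, without fixing the number of clusters in advance, which is why it needs no prior knowledge here. A compact NumPy sketch of the responsibility/availability updates (damping factor, iteration count, and the toy data are illustrative assumptions):

```python
import numpy as np

def affinity_propagation(X, damping=0.7, iters=200):
    """Plain-NumPy sketch of affinity propagation message passing."""
    n = len(X)
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # negative squared distances
    diag = np.arange(n)
    S[diag, diag] = np.median(S[~np.eye(n, dtype=bool)])  # preference: median similarity
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
        AS = A + S
        top = AS.argmax(1)
        first = AS[diag, top]
        AS[diag, top] = -np.inf
        second = AS.max(1)
        Rnew = S - first[:, None]
        Rnew[diag, top] = S[diag, top] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[diag, diag] = R[diag, diag]
        Anew = Rp.sum(0)[None, :] - Rp
        dA = Anew[diag, diag].copy()
        Anew = np.minimum(Anew, 0)
        Anew[diag, diag] = dA
        A = damping * A + (1 - damping) * Anew
    exemplars = np.where((A + R)[diag, diag] > 0)[0]
    labels = S[:, exemplars].argmax(1)                    # assign to nearest exemplar
    labels[exemplars] = np.arange(len(exemplars))         # exemplars label themselves
    return exemplars, labels

# two well-separated toy groups; AP should pick one exemplar per group
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(5.0, 0.1, (5, 2))])
exemplars, labels = affinity_propagation(X)
```

In the granular RBF model, the exemplars `X[exemplars]` would serve as the RBF centers and the within-cluster spread as the widths handed to the hidden units.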
     3. For granular BP and granular RBF networks with a single-hidden-layer structure, an adaptive extreme learning algorithm is proposed to optimize the connection weights and thresholds. In this algorithm, AP clustering adaptively determines the number of hidden neurons, and the obtained cluster centers and widths define Gaussian functions that serve as the new activation functions of the hidden layer.
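Under this scheme the hidden layer is no longer random: each hidden node is a Gaussian centered on a cluster center, while the output weights are still solved in a single pass as in the extreme learning machine. A sketch assuming centers and widths were already produced by some clustering step (the hand-picked values below are illustrative):

```python
import numpy as np

def gaussian_hidden(X, centers, widths):
    """H[i, j] = exp(-||x_i - c_j||^2 / (2 * sigma_j^2))"""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_output_weights(X, T, centers, widths):
    H = gaussian_hidden(X, centers, widths)
    return np.linalg.pinv(H) @ T          # one-pass least-squares solution

# toy two-class problem; centers/widths chosen by hand for illustration
X = np.array([[0., 0.], [0.2, 0.1], [1., 1.], [1.1, 0.9]])
T = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])   # one-hot labels
centers = np.array([[0.1, 0.05], [1.05, 0.95]])
widths = np.array([0.5, 0.5])
beta = train_output_weights(X, T, centers, widths)
pred = (gaussian_hidden(X, centers, widths) @ beta).argmax(1)
```

Because the number of hidden nodes equals the number of clusters found adaptively, the usual trial-and-error choice of hidden-layer size in ELM is avoided.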
     4. Based on the decision rules extracted by the attribute reduction and value reduction algorithms, a new granular neural network model, called the rough-rule granular neural network, is proposed. In this model, a rule-matching layer replaces the hidden layer of a traditional neural network; each neuron in the rule-matching layer represents one decision rule, and the input and output weights are initialized according to the antecedents and consequents of the rules. The output weights are then further adjusted by the extreme learning algorithm to improve the classification ability of the network.
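One way to realize this construction: each extracted rule "IF attribute values THEN class" becomes one hidden neuron whose input weights encode the antecedent over one-hot-coded attributes and whose initial output weights encode the consequent; the output weights are then refined in one pass. The rules, coding, and samples below are made-up illustrations of this reading, not the dissertation's actual scheme:

```python
import numpy as np

# One-hot coding of two attributes: outlook in {sunny, rain}, windy in {0, 1}.
# Input layout: [sunny, rain, windy=0, windy=1]
rules = [
    (np.array([1., 0., 0., 0.]), 0),  # IF outlook=sunny            THEN class 0
    (np.array([0., 1., 0., 1.]), 1),  # IF outlook=rain AND windy=1 THEN class 1
    (np.array([0., 1., 1., 0.]), 0),  # IF outlook=rain AND windy=0 THEN class 0
]
W_in = np.stack([m / m.sum() for m, _ in rules])  # antecedents -> input weights
W_out = np.zeros((len(rules), 2))
for j, (_, c) in enumerate(rules):
    W_out[j, c] = 1.0                             # consequents -> initial output weights

X = np.array([[1., 0., 1., 0.],    # sunny, windy=0
              [0., 1., 0., 1.],    # rain,  windy=1
              [0., 1., 1., 0.]])   # rain,  windy=0
T = np.eye(2)[[0, 1, 0]]           # one-hot targets

H = X @ W_in.T                     # rule-matching layer: match degree per rule
beta = np.linalg.pinv(H) @ T       # one-pass refinement of the output weights
pred = (H @ beta).argmax(1)
```

The rule-derived `W_out` already classifies these samples; the pseudoinverse step only fine-tunes the output weights, which is exactly the division of labor described above.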
     5. Considering that decision rules should be fault-tolerant, granular double-neuron networks and their learning algorithm are proposed based on the variable precision rough set model and the extreme learning algorithm. In this model, the neurons of the middle and output layers are all granular double neurons, each consisting of an upper-approximation neuron and a lower-approximation neuron that represent the upper and lower approximations of a rule. The output weights are then adjusted by the extreme learning algorithm to improve the classification ability of the network. In addition, to improve the capability of granular double-neuron networks on large data sets, an optimization method based on the AP clustering algorithm is proposed.
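The variable-precision approximations that the double neurons represent can be computed from inclusion degrees: an equivalence class belongs to the beta-lower approximation of a decision concept if at least a fraction beta of its objects carry that decision, and to the beta-upper approximation if more than 1 - beta do. A toy sketch (the value of beta and the table are illustrative assumptions):

```python
from collections import defaultdict

def vprs_approximations(rows, attrs, decision, target, beta=0.8):
    """Beta-lower / beta-upper approximation of {x : x[decision] == target}."""
    classes = defaultdict(list)
    for i, r in enumerate(rows):
        classes[tuple(r[a] for a in attrs)].append(i)
    lower, upper = set(), set()
    for members in classes.values():
        incl = sum(rows[i][decision] == target for i in members) / len(members)
        if incl >= beta:          # almost fully included -> lower approximation
            lower |= set(members)
        if incl > 1 - beta:       # not almost fully excluded -> upper approximation
            upper |= set(members)
    return lower, upper

rows = [
    {"a": 0, "d": "yes"}, {"a": 0, "d": "yes"}, {"a": 0, "d": "yes"},
    {"a": 0, "d": "yes"}, {"a": 0, "d": "no"},                 # class a=0: 4/5 "yes"
    {"a": 1, "d": "no"},  {"a": 1, "d": "no"},                 # class a=1: 0/2 "yes"
]
low, up = vprs_approximations(rows, ["a"], "d", "yes", beta=0.8)
```

With beta = 0.8 the noisy class a=0 still enters the lower approximation, whereas the classical model (beta = 1) would reject it outright; this tolerance of misclassified objects is the fault tolerance the double neurons are meant to capture.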
     The dissertation studies several granular neural network models and their learning algorithms, and verifies the effectiveness of these models by experiments.