Fast learning network: a novel artificial neural network with a fast learning speed
  • Authors: Guoqiang Li (1) (2)
    Peifeng Niu (1) (2)
    Xiaolong Duan (1) (2)
    Xiangye Zhang (1) (2)
  • Keywords: Artificial neural network; Fast learning network; Extreme learning machine; Least squares
  • Journal: Neural Computing & Applications
  • Publication date: June 2014
  • Volume: 24
  • Issue: 7-8
  • Pages: 1683-1695
  • References: 1. Green M, Ekelund U, Edenbrandt L, Björk J, Forberg JL, Ohlsson M (2009) Exploring new possibilities for case-based explanation of artificial neural network ensembles. Neural Netw 22:75-81
    2. May RJ, Maier HR, Dandy GC (2010) Data splitting for artificial neural networks using SOM-based stratified sampling. Neural Netw 23:283-294
    3. Kiranyaz S, Ince T, Yildirim A, Gabbouj M (2009) Evolutionary artificial neural networks by multi-dimensional particle swarm optimization. Neural Netw 22:1448-1462
    4. Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489-501
    5. Suresh S, Venkatesh Babu R, Kim HJ (2009) No-reference image quality assessment using modified extreme learning machine classifier. Appl Soft Comput 9:541-552
    6. Li G, Niu P (2011) An enhanced extreme learning machine based on ridge regression for regression. Neural Comput Appl. doi:10.1007/s00521-011-0771-7
    7. Romero E, Alquézar R (2012) Comparing error minimized extreme learning machines and support vector sequential feed-forward neural networks. Neural Netw 25:122-129
    8. Zhu Q-Y, Qin AK, Suganthan PN, Huang G-B (2005) Evolutionary extreme learning machine. Pattern Recogn 38:1759-1763
    9. Huynh HT, Won Y (2008) Small number of hidden units for ELM with two-stage linear model. IEICE Trans Inform Syst 91-D:1042-1049
    10. He M (1993) Theory, application and related problems of double parallel feedforward neural networks. Ph.D. thesis, Xidian University, Xi'an
    11. Wang J, Wu W, Li Z, Li L (2011) Convergence of gradient method for double parallel feedforward neural network. Int J Numer Anal Model 8:484-495
    12. Tamura S, Tateishi M (1997) Capabilities of a four-layered feedforward neural network: four layers versus three. IEEE Trans Neural Netw 8:251-255
    13. Huang G-B (1998) Learning capability of neural networks. Ph.D. thesis, Nanyang Technological University, Singapore
    14. Huang G-B (2003) Learning capability and storage capacity of two-hidden-layer feedforward networks. IEEE Trans Neural Netw 14:274-281
    15. Huang G-B, Chen L, Siew C-K (2006) Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans Neural Netw 17:879-892
    16. Rao CR, Mitra SK (1971) Generalized inverse of matrices and its applications. Wiley, New York
    17. Serre D (2002) Matrices: theory and applications. Springer, New York
    18. Liang N-Y, Huang G-B, Saratchandran P, Sundararajan N (2006) A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 17:1411-1423
    19. Lan Y, Soh YC, Huang G-B (2010) Two-stage extreme learning machine for regression. Neurocomputing 73:3028-3038
    20. Xu C, Lu J, Zheng Y (2006) An experiment and analysis for a boiler combustion optimization on efficiency and NOx emissions. Boil Technol 37:69-74
  • Affiliations: Guoqiang Li (1) (2)
    Peifeng Niu (1) (2)
    Xiaolong Duan (1) (2)
    Xiangye Zhang (1) (2)

    1. Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao, 066004, China
    2. National Engineering Research Center for Equipment and Technology of Cold Strip Rolling, Qinhuangdao, 066004, China
  • ISSN:1433-3058
Abstract
This paper proposes a novel artificial neural network called the fast learning network (FLN). In the FLN, the input weights and hidden-layer biases are randomly generated, while the weights connecting the hidden layer to the output layer and the weights connecting the input layer directly to the output layer are analytically determined by least squares. To test the validity of the FLN, it is applied to nine regression applications. Experimental results show that, compared with the support vector machine, back-propagation, and the extreme learning machine, the FLN achieves very good generalization performance and stability with much more compact networks, at a very fast training speed and with a quick response of the trained network to new observations. To test its validity further, the FLN is applied to model the thermal efficiency and NOx emissions of a 330 MW coal-fired boiler, where it achieves very good prediction precision and generalization ability at a high learning speed.
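The training procedure described in the abstract — random input weights and hidden biases, with the output-side weights (from both the hidden layer and the direct input-to-output connections) solved jointly by least squares — can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the sigmoid activation, the uniform (-1, 1) weight range, and all names are assumptions.

```python
import numpy as np

def fln_train(X, T, n_hidden, rng=None):
    """Sketch of FLN-style training.

    X: (n_samples, n_in) inputs; T: (n_samples, n_out) targets.
    Input weights and hidden biases are drawn at random; the weights
    from the hidden layer and from the input layer to the output are
    determined in one least-squares solve.
    """
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    W_in = rng.uniform(-1.0, 1.0, (n_in, n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)             # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))        # sigmoid hidden outputs
    G = np.hstack([H, X])                            # hidden + direct input features
    W_out, *_ = np.linalg.lstsq(G, T, rcond=None)    # least-squares output weights
    return W_in, b, W_out

def fln_predict(X, W_in, b, W_out):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return np.hstack([H, X]) @ W_out
```

Because the only fitted parameters are obtained from a single linear least-squares solve, training cost is one matrix factorization rather than an iterative gradient descent, which is consistent with the fast training speed the abstract reports.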
