© The Institution of Engineering and Technology
Quickprop is one of the most popular fast learning algorithms for training feed-forward neural networks. Although it converges quickly, it is still limited by the gradient of the backpropagation algorithm and is easily trapped in a local minimum. A new fast learning algorithm is proposed to overcome these two drawbacks. Performance investigations on different learning problems (applications) show that the new algorithm always converges, and with a faster learning rate than Quickprop and other fast learning algorithms. The improvement in global convergence capability is especially large: in one learning problem, the rate of global convergence increased from 4% to 100%.
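For context, the baseline Quickprop update that the abstract refers to approximates the error surface of each weight by a parabola and jumps toward its minimum using a secant step through the current and previous gradients. The following is a minimal sketch of that classic update rule, not the proposed algorithm; the function name, the growth factor `mu`, and the fallback learning rate `lr` are illustrative choices.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, mu=1.75, lr=0.1):
    """One Quickprop-style weight update (sketch).

    w         : current weight vector
    grad      : current error gradient dE/dw
    prev_grad : gradient at the previous step
    prev_step : previous weight change
    mu        : growth limit capping the quadratic step
    lr        : fallback learning rate for plain gradient steps
    """
    step = np.zeros_like(w)
    denom = prev_grad - grad
    # Secant (quadratic) step where a previous step exists and the
    # denominator is well conditioned: dw = grad/(prev_grad-grad)*prev_step
    safe = (prev_step != 0) & (np.abs(denom) > 1e-12)
    step[safe] = grad[safe] / denom[safe] * prev_step[safe]
    # Cap growth at mu times the previous step size.
    limit = mu * np.abs(prev_step)
    step = np.clip(step, -limit, limit)
    # Otherwise fall back to a plain gradient-descent step -- this is the
    # gradient dependence (and local-minimum trap) the abstract mentions.
    step[~safe] = -lr * grad[~safe]
    return w + step, step
```

On a quadratic error surface the secant step lands exactly on the minimum in one update, which is the source of Quickprop's speed; on non-quadratic surfaces the gradient-based fallback inherits backpropagation's local-minimum problem.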