© The Institution of Electrical Engineers
An implementation of the back-propagation (BP) scheme for training feedforward neural networks is presented. With the proposed implementation, the BP scheme determines for itself the number of hidden nodes required to solve a particular problem. An illustration of the scheme is given.
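The abstract does not describe the mechanism by which the network sizes itself, so the following is only a hypothetical sketch of the general idea, not the authors' algorithm: plain on-line back-propagation on a single-hidden-layer sigmoid network that starts with one hidden node and adds another whenever training stalls above an error tolerance, so the network effectively chooses its own hidden-layer size. All names (`GrowingBP`, `fit`, the XOR demo) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingBP:
    """One-hidden-layer sigmoid net trained by BP, with node growth (illustrative sketch)."""

    def __init__(self, n_in, lr=0.5):
        self.n_in, self.lr = n_in, lr
        # Start with a single hidden node; the extra column holds the bias weight.
        self.W1 = rng.normal(scale=0.5, size=(1, n_in + 1))
        self.W2 = rng.normal(scale=0.5, size=(1, 2))  # hidden + bias -> one output

    def forward(self, x):
        h = sigmoid(self.W1 @ np.append(x, 1.0))
        y = sigmoid(self.W2 @ np.append(h, 1.0))
        return h, y

    def train_step(self, X, T):
        """One on-line BP pass over the patterns; returns total squared error."""
        err = 0.0
        for x, t in zip(X, T):
            h, y = self.forward(x)
            delta_o = (y - t) * y * (1.0 - y)                  # output delta
            delta_h = (self.W2[:, :-1].T @ delta_o) * h * (1.0 - h)  # hidden deltas
            self.W2 -= self.lr * np.outer(delta_o, np.append(h, 1.0))
            self.W1 -= self.lr * np.outer(delta_h, np.append(x, 1.0))
            err += float((y - t) ** 2)
        return err

    def grow(self):
        """Add one hidden node with small random weights."""
        self.W1 = np.vstack([self.W1, rng.normal(scale=0.5, size=(1, self.n_in + 1))])
        # Insert the new node's outgoing weight just before the bias column.
        self.W2 = np.insert(self.W2, self.W2.shape[1] - 1, rng.normal(scale=0.5), axis=1)

def fit(X, T, max_nodes=8, epochs_per_stage=2000, tol=0.05):
    """Train, growing the hidden layer until the error target or node cap is hit."""
    net = GrowingBP(X.shape[1])
    while True:
        for _ in range(epochs_per_stage):
            err = net.train_step(X, T)
        if err < tol or net.W1.shape[0] >= max_nodes:
            return net, err
        net.grow()

# Demo: XOR, which a single hidden node cannot represent, so growth must occur.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
net, err = fit(X, T)
```

The stopping rule here (grow on a stalled error, stop at a cap) is one plausible choice; the letter itself may use a different growth or termination criterion.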