Determining the number of hidden nodes by progressive training

An implementation of the back-propagation (BP) scheme for training feedforward neural networks is presented. With the proposed implementation, the BP scheme itself determines the number of hidden nodes required to solve a particular problem. An illustration of the scheme is given.
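
The abstract does not state the growth criterion, so the following is only a minimal sketch of one plausible progressive-training loop: start with a single hidden node, train with plain BP, and add a node whenever training fails to reach a target error. The convergence threshold, the retrain-from-scratch strategy, and all names below are illustrative assumptions, not the letter's actual algorithm.

    # Hypothetical sketch of progressive training: grow the hidden layer
    # one node at a time, retraining with vanilla back propagation after
    # each growth step, until the network solves the task. The stopping
    # rule (MSE < 0.01) is an assumption; the letter does not give it.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_bp(X, y, n_hidden, epochs=20000, lr=1.0):
        # Train a one-hidden-layer sigmoid network by gradient descent
        # on mean squared error; return the final training MSE.
        n_in = X.shape[1]
        W1 = rng.normal(scale=1.0, size=(n_in, n_hidden))
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(scale=1.0, size=(n_hidden, 1))
        b2 = np.zeros(1)
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)              # hidden activations
            out = sigmoid(h @ W2 + b2)            # network output
            err = out - y
            d_out = err * out * (1.0 - out)       # output-layer delta
            d_h = (d_out @ W2.T) * h * (1.0 - h)  # hidden-layer delta
            W2 -= lr * h.T @ d_out / len(X)
            b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * X.T @ d_h / len(X)
            b1 -= lr * d_h.mean(axis=0)
        return float(np.mean(err ** 2))

    # XOR: the classic problem that cannot be solved without hidden nodes.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Progressive loop: grow the hidden layer until BP solves the problem.
    for n_hidden in range(1, 6):
        mse = train_bp(X, y, n_hidden)
        print(f"{n_hidden} hidden node(s): training MSE = {mse:.4f}")
        if mse < 0.01:  # assumed convergence criterion
            break

A more faithful "progressive" variant would presumably retain the weights already learned and initialise only the new node's connections, so earlier training is not discarded; the sketch retrains from scratch purely to keep the loop short.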
