RAPIDO: a rejuvenating adaptive PID-type optimiser for deep neural networks

The authors present a novel gradient descent algorithm for deep learning called RAPIDO. It adapts over time and optimises using current, past and predicted future gradient information, analogous to a PID controller. The method is well suited to optimising deep neural networks built from activation functions such as the sigmoid, hyperbolic tangent and ReLU, because it can adapt appropriately to sudden changes in the gradients. The authors evaluate the method experimentally against other optimisers on a quadratic objective function and the MNIST classification task, where it shows better performance than the compared methods.
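The abstract does not give the exact RAPIDO update rule, but the PID analogy can be made concrete. The sketch below shows a generic PID-type gradient update in NumPy: a proportional term (the current gradient), an integral term (an exponential moving average of past gradients, i.e. momentum-like), and a derivative term (the change since the previous gradient, a crude one-step forecast). The gains kp, ki, kd, the decay beta and the learning rate are illustrative assumptions, not the authors' RAPIDO settings.

```python
import numpy as np

def pid_step(theta, grad, state, lr=0.02, kp=1.0, ki=0.9, kd=0.1, beta=0.9):
    """One generic PID-type parameter update (illustrative, not RAPIDO).

    P: the current gradient.
    I: an exponential moving average of past gradients (momentum-like).
    D: the change since the previous gradient, a rough one-step
       forecast of where the gradient is heading.
    """
    m = beta * state.get("m", np.zeros_like(theta)) + (1.0 - beta) * grad
    d = grad - state.get("g_prev", np.zeros_like(theta))
    state["m"], state["g_prev"] = m, grad
    return theta - lr * (kp * grad + ki * m + kd * d)

# Demo on a simple quadratic f(theta) = 0.5 * theta^T A theta, echoing
# the paper's quadratic-objective experiment (this A is arbitrary).
A = np.diag([1.0, 10.0])
theta, state = np.array([5.0, 5.0]), {}
for _ in range(500):
    theta = pid_step(theta, A @ theta, state)
print(theta)  # should be close to the minimiser at the origin
```

The derivative term reacts to sudden gradient changes one step before the integral term can, which is the intuition behind suiting activations such as ReLU, whose gradients switch abruptly.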
