Efficient spiking neural network training and inference with reduced precision memory and computing

In this study, reduced precision operations are investigated to improve the speed and energy efficiency of SNN implementations. Instead of the default 32-bit single-precision floating-point format, smaller floating-point and fixed-point formats are used to represent SNN parameters and to perform SNN operations. The analysis is performed on the training and inference of a leaky integrate-and-fire model-based SNN that is trained and used to classify the handwritten digits of the MNIST database. The results show that for SNN inference, a floating-point format with a 4-bit exponent and 3-bit mantissa, or a fixed-point format with a 6-bit integer part and 7-bit fraction part, can be used without any accuracy degradation. For training, a floating-point format with a 5-bit exponent and 3-bit mantissa, or a fixed-point format with a 6-bit integer part and 10-bit fraction part, is sufficient to obtain full accuracy. The proposed reduced precision formats can be used in SNN hardware accelerator design, and the choice between floating point and fixed point can be made according to design requirements. A case study of an SNN implementation on a field-programmable gate array device is performed. With the reduced precision numerical formats, memory footprint, computing speed, and resource utilisation are improved; as a result, the energy efficiency of the SNN implementation is also improved.
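
As a concrete illustration of the fixed-point inference format reported above, the following minimal Python sketch (not the authors' implementation) emulates quantisation to a format with a 6-bit integer part and 7-bit fraction part around a leaky integrate-and-fire membrane update. The time constant tau_m, threshold v_th, and input current are hypothetical values chosen only to keep the example self-contained.

# Minimal sketch of emulating the reduced precision fixed-point format.
# The bit widths follow the inference format reported in the abstract;
# tau_m, v_th and i_in are hypothetical values for illustration only.

def to_fixed(x, int_bits=6, frac_bits=7):
    """Quantise x to a signed fixed-point value with the given widths."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))      # most negative code
    hi = (1 << (int_bits + frac_bits - 1)) - 1   # most positive code
    code = max(lo, min(hi, round(x * scale)))    # round and saturate
    return code / scale                          # back to a real value

def lif_step(v, i_in, tau_m=20.0, v_th=1.0, dt=1.0):
    """One leaky integrate-and-fire update with operands held in fixed point."""
    v = to_fixed(v)
    i_in = to_fixed(i_in)
    dv = to_fixed((i_in - v) * (dt / tau_m))     # leaky integration step
    v = to_fixed(v + dv)
    if v >= v_th:                                # threshold crossing
        return 0.0, 1                            # reset and emit a spike
    return v, 0

# Example: drive a single neuron with a constant current and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.5)
    spikes += s
print(f"membrane potential = {v:.4f}, spikes = {spikes}")

Changing frac_bits to 10 gives the fraction width reported for training; an actual hardware accelerator would operate on the integer codes directly rather than converting back to floating point as this software emulation does.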

Inspec keywords: neural nets; reconfigurable architectures; floating point arithmetic; field programmable gate arrays

Other keywords: neuromorphic system designs; reduced precision memory; 5-bit exponent; reduced precision numerical formats; 7-bit fraction; SNN hardware implementations; SNN operations; SNN inference; 6-bit integer; fixed-point format; large-scale SNN models; SNN implementation; 10-bit fraction; spiking neuron models; energy efficiency; speed performance; leaky integrate-and-fire model-based SNN; 4-bit exponent; precision formats; reduced precision operations; 3-bit mantissa; human neural system; efficient spiking neural network training; SNN parameters; default 32-bit single-precision floating-point format; SNN hardware accelerator design; MNIST database

Subjects: Neural computing techniques; Digital arithmetic methods; Logic and switching circuits; Neural nets (circuit implementations); Logic circuits
