© The Institution of Engineering and Technology
In this study, reduced-precision operations are investigated as a means of improving the speed and energy efficiency of spiking neural network (SNN) implementations. Instead of the 32-bit single-precision floating-point format, reduced-width floating-point and fixed-point formats are used to represent SNN parameters and to perform SNN operations. The analysis covers both training and inference of a leaky integrate-and-fire model-based SNN trained to classify handwritten digits from the MNIST database. The results show that, for SNN inference, a floating-point format with a 4-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer and 7 fraction bits, can be used without any accuracy degradation. For training, a floating-point format with a 5-bit exponent and 3-bit mantissa, or a fixed-point format with 6 integer and 10 fraction bits, preserves full accuracy. The proposed reduced-precision formats can be applied in SNN hardware accelerator design, with the choice between floating point and fixed point determined by design requirements. A case study of an SNN implementation on a field-programmable gate array device is presented: with the reduced-precision numerical formats, memory footprint, computing speed, and resource utilisation all improve, and consequently so does the energy efficiency of the SNN implementation.
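To illustrate the kind of fixed-point representation the abstract refers to, the sketch below quantises a value to a signed fixed-point format with a configurable integer/fraction bit split (e.g. 6 integer and 7 fraction bits for inference, 6 and 10 for training). This is a minimal illustration, not the authors' implementation; in particular, the assumption that a sign bit is carried in addition to the stated integer bits, and the round-to-nearest saturating behaviour, are choices made here for clarity.

```python
def to_fixed_point(x, int_bits=6, frac_bits=7):
    """Quantise x to a signed fixed-point value with int_bits integer
    bits and frac_bits fraction bits (plus a sign bit, assumed here).

    Uses round-to-nearest with saturation at the representable range.
    Returns the quantised value as an ordinary float for inspection.
    """
    scale = 1 << frac_bits                      # step size = 2**-frac_bits
    max_code = (1 << (int_bits + frac_bits)) - 1  # largest positive code
    code = round(x * scale)                     # round to nearest code
    code = max(-max_code - 1, min(max_code, code))  # saturate
    return code / scale

# A Q6.7-style format resolves steps of 2**-7 = 0.0078125:
print(to_fixed_point(0.1, int_bits=6, frac_bits=7))   # 13/128 = 0.1015625
# Values beyond the range saturate rather than wrap:
print(to_fixed_point(1000.0, int_bits=6, frac_bits=7))  # 8191/128
```

Widening the fraction field (e.g. Q6.10 for training) shrinks the quantisation step, which is consistent with training requiring more fraction bits than inference to reach full accuracy.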