Quantisation and pooling method for low-inference-latency spiking neural networks


Spiking neural networks (SNNs) converted from conventional deep neural networks (DNNs) have shown great potential as a solution for fast and efficient recognition. A layer-wise quantisation method based on retraining is proposed to quantise the activations of the DNN, which reduces the number of time steps the converted SNN needs to reach minimal accuracy loss. The pooling function is incorporated into the convolutional layers, removing up to 20% of the spiking neurons. The converted SNNs achieve 99.15% accuracy on MNIST and 82.9% on CIFAR10 within only seven time steps, and only 10–40% of the spikes need to be processed compared with networks trained with traditional algorithms. The experimental results show that the proposed methods can build hardware-friendly SNNs with ultra-low inference latency.
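To make the idea of quantising activations before DNN-to-SNN conversion concrete, the sketch below shows a generic quantised activation trained with a straight-through estimator, assuming a PyTorch-style workflow. The class name QuantReLU, the number of quantisation levels, and the strided convolution standing in for a separate pooling layer are illustrative assumptions, not the letter's implementation.

```python
# Minimal sketch: quantised activation with a straight-through estimator,
# plus a strided convolution as one possible way of folding pooling into a
# convolutional layer. Hyperparameters and names are illustrative only.
import torch
import torch.nn as nn


class QuantReLU(nn.Module):
    """Clamp activations to [0, 1] and round them to a small set of levels.

    The forward pass quantises; the backward pass uses a straight-through
    estimator so the surrounding network can still be retrained.
    """

    def __init__(self, levels: int = 7):
        super().__init__()
        self.levels = levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(x, 0.0, 1.0)
        q = torch.round(x * self.levels) / self.levels
        # Use the quantised value in the forward pass, but let gradients
        # flow through the clamped (unquantised) activation.
        return x + (q - x).detach()


# Example block: a stride-2 convolution replaces an explicit pooling layer,
# followed by the quantised activation.
block = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    QuantReLU(levels=7),
)
out = block(torch.randn(8, 1, 28, 28))  # -> shape (8, 16, 14, 14)
```

With a rate-coded conversion, an activation restricted to a few discrete levels can be represented by a correspondingly small number of spikes, which is the intuition behind needing only a handful of time steps at inference.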
