
## Computation reuse-aware accelerator for neural networks


From the book *Hardware Architectures for Deep Learning*


Power consumption has long been a significant concern in neural networks. In particular, the large networks that implement modern machine learning techniques require far more computation, and hence power, than ever before. In this chapter, we showed that computation reuse can exploit the inherent redundancy in the arithmetic operations of a neural network to save power. Experimental results showed that computation reuse, when coupled with the approximation property of neural networks, can eliminate up to 90% of the multiplications, reducing power consumption by 61% on average in the presented architecture. The proposed computation reuse-aware design can be extended in several ways. First, it can be integrated into state-of-the-art customized architectures for LSTM, spiking, and convolutional neural network models to further reduce power consumption. Second, computation reuse can be coupled with existing mapping and scheduling algorithms to develop reuse-aware scheduling and mapping methods for neural networks. Computation reuse can also boost the performance of methods that eliminate ineffectual computations in deep neural networks. Evaluating the impact of CORN on reliability and customizing the CORN architecture for FPGA-based neural network implementations are further directions for future work.
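To illustrate the idea, the following sketch memoizes weight-by-input products in a fully connected layer: quantizing the inputs (the approximation property mentioned above) collapses many distinct operands into the same value, so a product computed once can be reused instead of recomputed. This is a minimal software model, not the CORN hardware design; the function name, the quantization scheme, and the parameter `n_bits` are illustrative assumptions.

```python
import random

def layer_with_reuse(inputs, weights, n_bits=3):
    """Compute a fully connected layer while reusing repeated products.

    Inputs are quantized to n_bits levels so more (weight, input) operand
    pairs repeat; each distinct pair is multiplied only once and the
    result is reused thereafter. Illustrative model, not the CORN design.
    """
    # Quantize inputs so that operand pairs repeat more often (approximation).
    scale = (2 ** n_bits - 1) / max(max(abs(x) for x in inputs), 1e-12)
    q_inputs = [round(x * scale) / scale for x in inputs]

    product_cache = {}   # (weight, quantized input) -> product
    multiplications = 0  # actual multiplies performed
    outputs = []
    for row in weights:
        acc = 0.0
        for w, x in zip(row, q_inputs):
            key = (w, x)
            if key not in product_cache:
                product_cache[key] = w * x  # compute once
                multiplications += 1
            acc += product_cache[key]       # reuse thereafter
        outputs.append(acc)
    return outputs, multiplications

# Usage: with few distinct weight values and coarsely quantized inputs,
# most of the rows*cols multiplications are eliminated.
random.seed(0)
weights = [[random.choice([-0.5, -0.25, 0.25, 0.5]) for _ in range(64)]
           for _ in range(8)]
inputs = [random.random() for _ in range(64)]
out, mults = layer_with_reuse(inputs, weights, n_bits=3)
total = len(weights) * len(weights[0])
print(f"performed {mults} of {total} multiplications")
```

With 4 distinct weight values and 3-bit inputs there are at most 32 distinct operand pairs, so at most 32 of the 512 nominal multiplications are actually performed; the rest are served from the cache, which is the source of the power savings the chapter quantifies.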

Chapter Contents:

• 7.1 Motivation
• 7.2 Baseline architecture
• 7.2.1 Computation reuse support for weight redundancy
• 7.2.2 Computation reuse support for input redundancy
• 7.3 Multicore neural network implementation
• 7.3.1 More than K weights per neuron
• 7.3.2 More than N neurons per layer
• 7.4 Experimental results
• 7.5 Conclusion and future work
• References
