Computation reuse-aware accelerator for neural networks

Power consumption has long been a significant concern in neural networks. In particular, large neural networks that implement novel machine learning techniques require far more computation, and hence power, than ever before. In this chapter, we showed that computation reuse can exploit the inherent redundancy in the arithmetic operations of a neural network to save power. Experimental results showed that computation reuse, when coupled with the approximation property of neural networks, can eliminate up to 90% of multiplications, reducing power consumption by 61% on average in the presented architecture. The proposed computation reuse-aware design can be extended in several ways. First, it can be integrated into state-of-the-art customized architectures for LSTM, spiking, and convolutional neural network models to further reduce power consumption. Second, computation reuse can be coupled with existing mapping and scheduling algorithms to develop reuse-aware scheduling and mapping methods for neural networks. Computation reuse can also boost the performance of methods that eliminate ineffectual computations in deep neural networks. Evaluating the impact of CORN on reliability and customizing the CORN architecture for FPGA-based neural network implementations are other directions for future work.
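The core idea above can be illustrated with a small sketch. Assuming weights are quantized to a few distinct values (the weight redundancy the chapter exploits), each distinct weight-input product for a given input needs to be computed only once and can then be shared by every neuron in the layer. The function name `reuse_layer` and the dictionary-based product cache are illustrative assumptions, not the actual CORN hardware mechanism:

```python
def reuse_layer(inputs, weight_matrix):
    """Compute a fully connected layer while reusing products.

    weight_matrix[n][i] is the weight of neuron n for input i, drawn
    from a small set of distinct (quantized) values. For each input we
    cache each distinct weight-input product, so redundant
    multiplications are skipped. Returns (outputs, multiplications).
    """
    outputs = [0.0] * len(weight_matrix)
    mults = 0  # count of actual multiplications performed
    for i, x in enumerate(inputs):
        cache = {}  # distinct weight value -> product with this input
        for n, row in enumerate(weight_matrix):
            w = row[i]
            if w not in cache:
                cache[w] = w * x  # computed once per distinct weight
                mults += 1
            outputs[n] += cache[w]  # reused by every matching neuron
    return outputs, mults


# Example: 4 neurons, 3 inputs, weights quantized to {0.5, -0.5}.
# A naive layer would perform 4 * 3 = 12 multiplications; with reuse,
# at most 2 distinct products per input are computed, i.e. 6 total.
inputs = [1.0, 2.0, 3.0]
W = [[0.5, -0.5, 0.5],
     [0.5, 0.5, -0.5],
     [-0.5, -0.5, 0.5],
     [0.5, -0.5, -0.5]]
outputs, mults = reuse_layer(inputs, W)
```

The heavier the quantization (fewer distinct weight values), the larger the fraction of multiplications eliminated, which is the mechanism behind the savings reported above.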

Chapter Contents:

  • 7.1 Motivation
  • 7.2 Baseline architecture
  • 7.2.1 Computation reuse support for weight redundancy
  • 7.2.2 Computation reuse support for input redundancy
  • 7.3 Multicore neural network implementation
  • 7.3.1 More than K weights per neuron
  • 7.3.2 More than N neurons per layer
  • 7.4 Experimental results
  • 7.5 Conclusion and future work
  • References

Inspec keywords: learning (artificial intelligence); convolutional neural nets; power aware computing

Other keywords: arithmetic operations; spiking neural network; convolutional neural network; machine learning; computation reuse-aware accelerator; LSTM; power consumption; neural networks

Subjects: Performance evaluation and testing; Electrical/electronic equipment (energy utilisation); Neural computing techniques

