Hardware acceleration for recurrent neural networks

This chapter focuses on the LSTM model and is concerned with the design of high-performance, energy-efficient solutions for deep learning inference. The chapter is organized as follows: Section 2.1 introduces recurrent neural networks (RNNs) and discusses the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models as special kinds of RNNs. Section 2.2 discusses hardware acceleration of inference. Section 2.3 surveys FPGA designs in the context of previous related work, and Section 2.4 concludes the chapter.
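For context on the computation that Section 2.1.1 covers, the standard LSTM cell update can be sketched as follows. This is a minimal NumPy illustration of the textbook formulation, not code from the chapter; the variable names and stacked-gate layout are our own assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (standard formulation, illustrative only).

    x: input vector (n_in,); h_prev, c_prev: previous hidden/cell state (n_hid,)
    W: (4*n_hid, n_in) input weights; U: (4*n_hid, n_hid) recurrent weights;
    b: (4*n_hid,) biases; gates stacked in the order [input, forget, cell, output].
    """
    n = h_prev.shape[0]
    # The two matrix-vector products dominate the cost -- this is the part
    # that hardware accelerators (e.g., FPGA designs) target.
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0 * n:1 * n])   # input gate
    f = sigmoid(z[1 * n:2 * n])   # forget gate
    g = np.tanh(z[2 * n:3 * n])   # candidate cell state
    o = sigmoid(z[3 * n:4 * n])   # output gate
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state
    return h, c
```

Note that the recurrence on h_prev makes successive time steps data-dependent, which is one reason RNN inference is harder to parallelize than feed-forward inference.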

Chapter Contents:

  • 2.1 Recurrent neural networks
  • 2.1.1 Long short-term memory
  • Main concept of LSTMs
  • Steps in LSTM
  • Variants on the LSTM model
  • 2.1.2 Gated recurrent units
  • 2.2 Hardware acceleration for RNN inference
  • 2.2.1 Software implementation
  • 2.2.2 Hardware implementation
  • 2.3 Hardware implementation of LSTMs
  • 2.3.1 Model compression
  • 2.3.2 Datatype and quantization
  • 2.3.3 Memory
  • 2.4 Conclusion
  • References

Inspec keywords: recurrent neural nets; field programmable gate arrays

Other keywords: LSTM model; deep learning inference; energy-efficient solution; gated recurrent unit network models; FPGA designs; RNN; recurrent neural networks; high-performance solution; GRU network models; hardware acceleration; long short term memory

Subjects: Neural computing techniques; Logic circuits; Logic and switching circuits
