Hardware and software techniques for sparse deep neural networks

From the book: Hardware Architectures for Deep Learning

Over the past four decades, every generation of processors has delivered a 2x performance boost, as predicted by Moore's law. Ironically, the end of Moore's law occurred at almost the same time as computationally intensive deep learning algorithms were emerging. Deep neural networks (DNNs) offer state-of-the-art solutions for many applications, including computer vision, speech recognition, and natural language processing. However, this is just the tip of the iceberg: deep learning is taking over many classic machine-learning applications and is also creating new markets, such as autonomous vehicles, which will tremendously amplify the demand for even more computational power.

Chapter Contents:

  • 6.1 Introduction
  • 6.2 Different types of sparsity methods
  • 6.3 Software approach for pruning
  • 6.3.1 Hard pruning
  • 6.3.2 Soft pruning, structural sparsity, and hardware concern
  • 6.3.3 Questioning pruning
  • 6.4 Hardware support for sparsity
  • 6.4.1 Advantages of sparsity for dense accelerator
  • 6.4.2 Supporting activation sparsity
  • 6.4.3 Supporting weight sparsity
  • 6.4.3.1 Cambricon-X
  • 6.4.3.2 Bit-Tactical
  • 6.4.3.3 Neural processing unit
  • 6.4.4 Supporting both weight and activation sparsity
  • 6.4.4.1 Efficient inference engine
  • 6.4.4.2 ZeNA
  • 6.4.4.3 Sparse convolutional neural network
  • 6.4.5 Supporting output sparsity
  • 6.4.5.1 SnaPEA
  • 6.4.5.2 Uniform Serial Processing Element
  • 6.4.5.3 SparseNN
  • 6.4.5.4 ComPEND
  • 6.4.6 Supporting value sparsity
  • 6.4.6.1 Bit-pragmatic
  • 6.4.6.2 Laconic architecture
  • 6.5 Conclusion
  • References
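
As a rough illustration of the magnitude-based "hard pruning" idea listed under Section 6.3.1, a minimal sketch is given below. It uses NumPy; the function name and the global magnitude-threshold rule are assumptions made for illustration only, not the chapter's own code or method.

    # Minimal sketch of magnitude-based ("hard") pruning of one weight tensor.
    # Assumption: a layer's weights are available as a NumPy array; the target
    # sparsity is the fraction of weights to force to zero.
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
        k = int(sparsity * weights.size)               # number of weights to remove
        if k == 0:
            return weights.copy()
        # k-th smallest absolute value over the flattened tensor is the cut-off
        threshold = np.partition(np.abs(weights), k - 1, axis=None)[k - 1]
        mask = np.abs(weights) > threshold             # keep only weights above the cut-off
        return weights * mask

    # Example: prune a random 4x4 weight matrix to 75% sparsity.
    w = np.random.randn(4, 4)
    w_sparse = magnitude_prune(w, sparsity=0.75)
    print(np.count_nonzero(w), "nonzeros before;", np.count_nonzero(w_sparse), "after")

The resulting zeros are what the accelerator-side techniques surveyed in Section 6.4 exploit, e.g. by skipping multiplications whose weight or activation operand is zero.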

Inspec keywords: learning (artificial intelligence); neural nets

Other keywords: sparse deep neural networks; hardware techniques; software techniques; deep neural networks; machine learning; computationally intensive deep learning algorithms

Subjects: Neural computing techniques
