
Feedforward neural networks on massively parallel architectures



In this chapter, we present ClosNN, a specialized NoC for neural networks based on the well-known Clos topology. Clos is perhaps the most popular Multistage Interconnection Network (MIN) topology and is commonly used as the basis of the switching infrastructure in commercial telecommunication routers and switches.
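To make the topology concrete, the following is a minimal illustrative sketch (not the chapter's implementation) of routing in a three-stage Clos(m, n, r) network: r ingress switches with n input ports each, m middle switches, and r egress switches, where every ingress switch links to every middle switch and every middle switch to every egress switch, so any input can reach any output through any free middle switch. The function name and port numbering below are hypothetical, for illustration only.

```python
def clos_route(src, dst, n, middle):
    """Route global input port `src` to global output port `dst` via
    middle switch `middle` in a three-stage Clos(m, n, r) network.

    Each ingress/egress switch holds `n` consecutive ports, so integer
    division recovers the switch index. Returns the hop sequence
    (ingress switch, middle switch, egress switch).
    """
    ingress = src // n   # ingress switch holding the source port
    egress = dst // n    # egress switch holding the destination port
    # Any middle switch completes the path, since the middle stage is
    # fully connected to both the ingress and egress stages.
    return (ingress, middle, egress)

# Example: with n = 4 ports per edge switch, input port 5 sits on
# ingress switch 1 and output port 13 on egress switch 3.
print(clos_route(src=5, dst=13, n=4, middle=2))  # -> (1, 2, 3)
```

Clos's classic result is that with m >= n middle switches the network is rearrangeably non-blocking, and with m >= 2n - 1 it is strictly non-blocking, which is what makes the topology attractive as a switching fabric.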

Chapter Contents:

  • 3.1 Related work
  • 3.2 Preliminaries
  • 3.3 ClosNN: a customized Clos for neural network
  • 3.4 Collective communications on ClosNN
  • 3.5 ClosNN customization and area reduction
  • 3.6 Folded ClosNN
  • 3.7 Leaf switch optimization
  • 3.8 Scaling to larger NoCs
  • 3.9 Evaluation
  • 3.9.1 Performance comparison under synthetic traffic
  • 3.9.2 Performance evaluation under realistic workloads
  • 3.9.3 Power comparison
  • 3.9.4 Sensitivity to neural network size
  • 3.10 Conclusion
  • References

Inspec keywords: parallel architectures; network-on-chip; feedforward neural nets

Other keywords: ClosNN; network routers; feedforward neural network; massively parallel architectures; multistage interconnection network topology; switching infrastructures; NoC; switches; MIN topology

Subjects: Neural computing techniques; Network-on-chip; Parallel architecture
