XIMA: the in-ReRAM machine learning architecture

Book: ReRAM-based Machine Learning

This chapter presents ReRAM-based neural networks, with a focus on intensive matrix multiplication operations. It shows how a ReRAM-crossbar network can be used as a matrix-vector multiplication accelerator and illustrates the detailed mapping, and how a coupled ReRAM-oscillator network can be applied to low-power, high-throughput L2-norm calculation. A 3D single-layer CMOS-ReRAM architecture is used for a tensorized neural network (TNN). A 3D multilayer CMOS-ReRAM architecture has advantages in three respects. First, by utilizing the ReRAM crossbar for input data storage, the leakage power of memory is largely removed; in a 3D architecture with TSV interconnection, the bandwidth from one layer to the next is sufficiently large to support parallel computation. Second, the ReRAM crossbar can be configured as computational units that perform matrix-vector multiplication with high parallelism and low power. Lastly, with an additional layer of CMOS-ASIC, more complicated tasks such as division and nonlinear mapping can be performed. As a result, the whole training process of machine learning can be fully mapped onto the proposed 3D multilayer CMOS-ReRAM accelerator architecture, enabling real-time training and testing.
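The crossbar MVM idea above can be illustrated with a small numerical model: weights are programmed as cell conductances, input voltages drive the rows, and each bit-line current sums voltage-conductance products by Ohm's and Kirchhoff's laws. The following Python sketch is illustrative only; the device range, the number of conductance levels, and the mapping/readout functions are assumptions, not the chapter's actual scheme.

```python
import numpy as np

# Assumed device parameters (hypothetical, for illustration):
G_OFF, G_ON = 1e-6, 1e-4   # conductance range in siemens
LEVELS = 16                 # assumed number of programmable levels

def weights_to_conductance(W):
    """Map a non-negative weight matrix onto quantized conductances."""
    w_max = W.max()
    scaled = np.round(W / w_max * (LEVELS - 1)) / (LEVELS - 1)
    return G_OFF + scaled * (G_ON - G_OFF), w_max

def crossbar_mvm(G, v_in):
    """Analog MVM: each bit-line current I_j = sum_i V_i * G_ij,
    i.e. the dot product of the input voltages with column j."""
    return v_in @ G

W = np.array([[0.2, 0.8],
              [0.5, 0.1]])          # digital weights
G, w_max = weights_to_conductance(W)
x = np.array([0.3, 0.7])            # input voltages (assumed unit scale)
I = crossbar_mvm(G, x)              # sensed bit-line currents
# Invert the linear conductance mapping to recover the digital result
y = (I - x.sum() * G_OFF) / (G_ON - G_OFF) * w_max
```

With 16 levels, `y` matches `x @ W` up to a small quantization error; more levels (or bit-sliced cells) tighten the approximation.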

Chapter Contents:

  • 5.1 ReRAM network-based ML operations
  • 5.1.1 ReRAM-crossbar network
  • 5.1.1.1 Mapping of ReRAM crossbar for matrix–vector multiplication
  • 5.1.1.2 Performance evaluation
  • 5.1.2 Coupled ReRAM oscillator network
  • 5.1.2.1 Coupled-ReRAM-oscillator network for L2-norm calculation
  • 5.1.2.2 Performance evaluation
  • 5.2 ReRAM network-based in-memory ML accelerator
  • 5.2.1 Distributed ReRAM-crossbar in-memory architecture
  • 5.2.1.1 Memory-computing integration
  • 5.2.1.2 Communication protocol and control bus
  • 5.2.2 3D XIMA
  • 5.2.2.1 3D single-layer CMOS-ReRAM architecture
  • 5.2.2.2 3D multilayer CMOS-ReRAM architecture

Inspec keywords: matrix multiplication; parallel processing; tensors; resistive RAM; learning (artificial intelligence); neural net architecture; CMOS memory circuits

Other keywords: TNN; L2-norm calculation; matrix-vector multiplication; parallelism; 3D multilayer CMOS-ReRAM accelerator architecture; CMOS-ASIC; TSV interconnection; nonlinear mapping; parallel computation; tensorized neural network; ReRAM crossbar; ML; machine learning architecture

Subjects: Algebra; Digital storage; Multiprocessing systems; Memory circuits
