The need of in-memory computing

In this computing paradigm, the processing unit can handle only one task in a given interval and must wait for memory to update its results, because both data and instructions reside in the same memory space; this greatly limits throughput and causes idle power consumption. Although mechanisms such as caching and branch prediction can partially mitigate these issues, the "memory wall" still poses a grand challenge for the massive data interchange in modern processor technology. To break the "memory wall," in-memory processing has been studied since the 2000s and is regarded as a promising way to reduce redundant data movement between memory and processing unit and to decrease power consumption. The concept has been implemented with different hardware technologies, e.g., 3D-stacked dynamic random-access memory (DRAM) [69] and embedded Flash [70]. Software solutions such as pruning, quantization, and mixed-precision topologies are also used to reduce the intensity of data interchange.
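To make the software side of this concrete, the sketch below illustrates uniform 8-bit quantization, one of the techniques the paragraph mentions for cutting data movement across the memory wall. This is an illustrative example, not code from the chapter; all function names are hypothetical.

```python
# Hypothetical sketch: uniform symmetric 8-bit quantization of weights.
# Storing int8 instead of float32 cuts the bytes moved between memory
# and the processing unit by roughly 4x, at a small accuracy cost.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.88]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Every code fits in one byte; the reconstruction error per weight is
# bounded by scale / 2, i.e. max|w| / 254 for this symmetric scheme.
```

Pruning and mixed precision follow the same logic: fewer or smaller operands mean fewer bits crossing the memory interface per inference.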

Chapter Contents:

  • 2.1 Introduction
  • 2.2 Neuromorphic computing devices
  • 2.2.1 Resistive random-access memory
  • 2.2.2 Spin-transfer-torque magnetic random-access memory
  • 2.2.3 Phase change memory
  • 2.3 Characteristics of NVM devices for neuromorphic computing
  • 2.4 IMC architectures for machine learning
  • 2.4.1 Operating principles of IMC architectures
  • In-macro operating schemes
  • Architectures for operating schemes
  • 2.4.2 Analog and digitized fashion of IMC
  • 2.4.3 Analog IMC
  • Analog MAC
  • Cascading IMC macros
  • Bitcell and array design of analog IMC
  • Peripheral circuitry of analog IMC
  • Challenges of analog IMC
  • Trade-offs of analog IMC devices
  • 2.4.4 Digitized IMC
  • 2.4.5 Literature review of IMC
  • DRAM-based IMCs
  • NAND-Flash-based IMCs
  • SRAM-based IMCs
  • ReRAM-based IMCs
  • STT-MRAM-based IMCs
  • SOT-MRAM-based IMCs
  • 2.5 Analysis of IMC architectures

Inspec keywords: flash memories; DRAM chips

Other keywords: software solutions; embedded FLASH; in-memory computing; memory wall; memory space; power consumption; in-memory processing; redundant data movement reduction; 3D-stack dynamic random access memory

Subjects: Semiconductor storage; Memory circuits

