Exploring the Feasibility of Using 3-D XPoint as an In-Memory Computing Accelerator

Masoud Zabihi, Salonik Resch, Husrev Cilasun, Zamshed I. Chowdhury, Zhengyang Zhao, Ulya R. Karpuzcu, Jian Ping Wang, Sachin S. Sapatnekar

Research output: Contribution to journal › Article › peer-review



This article describes how 3-D XPoint memory arrays can be used as in-memory computing accelerators. We first show that thresholded matrix-vector multiplication (TMVM), the fundamental computational kernel in many applications including machine learning (ML), can be implemented within a 3-D XPoint array without requiring data to leave the array for processing. Building on this TMVM implementation, we then present a binary neural inference engine. We also address system scalability by connecting multiple 3-D XPoint arrays. To assure power integrity within the 3-D XPoint array, we carefully analyze the parasitic effects of metal lines on noise margins and on the accuracy of the implementations. We quantify the impact of these parasitics in limiting the size and configuration of a 3-D XPoint array, and estimate the maximum acceptable size of a 3-D XPoint subarray.
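As a rough illustration of the TMVM kernel described above, the sketch below computes a matrix-vector product and binarizes each output against a threshold, as a binary neural inference engine would. This is a plain software model only; the function name, threshold parameter, and binary operand choice are illustrative assumptions, and the code does not model the 3-D XPoint array mapping or its parasitics.

```python
import numpy as np

def thresholded_mvm(W, x, theta):
    """Thresholded matrix-vector multiplication (TMVM) sketch:
    compute W @ x, then compare each output to a threshold theta,
    yielding a binary result vector. Names/signature are illustrative."""
    return (W @ x >= theta).astype(np.uint8)

# Example with binary weights and inputs, as in binary neural inference.
W = np.array([[1, 0, 1],
              [1, 1, 1]])
x = np.array([1, 1, 0])
# Row sums are 1 and 2; with theta = 2 the outputs are 0 and 1.
print(thresholded_mvm(W, x, theta=2))  # [0 1]
```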

Original language: English (US)
Pages (from-to): 88-96
Number of pages: 9
Journal: IEEE Journal on Exploratory Solid-State Computational Devices and Circuits
Issue number: 2
State: Published - Jan 1 2021

Bibliographical note

Funding Information:
This work was supported in part by the National Science Foundation (NSF) under Award CCF-1725420 and Award CCF-1763761.

Publisher Copyright:
© 2014 IEEE.


Keywords

  • 3-D XPoint
  • in-memory computing
  • matrix-vector multiplication
  • neural network
  • phase-change memory (PCM)


