Associative memory via a sparse recovery model

Arya Mazumdar, Ankit Singh Rawat

Research output: Contribution to journal › Conference article › peer-review


Abstract

An associative memory is a structure learned from a dataset M of vectors (signals) such that, given a noisy version of one of the vectors as input, the nearest valid vector from M (nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, binary (or q-ary) Hopfield neural networks are used to model the above structure. In this paper, for the first time, we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. For a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store a super-polynomial or exponential number of n-length vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a near-linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.
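The abstract's premise can be illustrated with a small numerical sketch. The specific construction below is an assumption for illustration, not the paper's exact algorithm: stored vectors x are taken to lie in the null space of a random constraint matrix B (so Bx = 0), a query y = x + e is corrupted in a few coordinates, and since By = Be, the sparse error e can be recovered by a standard sparse-recovery routine (here, a hand-rolled orthogonal matching pursuit stands in for the paper's neurally feasible iterative algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5  # signal length, number of linear constraints, corruption sparsity

# Random constraint matrix B; stored vectors satisfy B x = 0 (they lie in null(B)).
B = rng.standard_normal((m, n))
null_basis = np.linalg.svd(B)[2][m:].T          # basis for null(B), shape (n, n - m)
x = null_basis @ rng.standard_normal(n - m)     # one "stored" vector

# Corrupt k coordinates: y = x + e with e sparse.
e = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
e[support] = rng.standard_normal(k)
y = x + e

# Since B y = B x + B e = B e, recovering e is a sparse recovery problem.
# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the coefficients by least squares.
s = B @ y
residual = s.copy()
idx = []
for _ in range(k):
    j = int(np.argmax(np.abs(B.T @ residual)))
    idx.append(j)
    coef, *_ = np.linalg.lstsq(B[:, idx], s, rcond=None)
    residual = s - B[:, idx] @ coef

e_hat = np.zeros(n)
e_hat[idx] = coef
x_hat = y - e_hat  # recalled vector: query minus estimated corruption
print(np.allclose(x_hat, x, atol=1e-6))
```

With these generic random dimensions (m = 80 measurements, sparsity k = 5), the corrupted coordinates are identified and the stored vector is recalled exactly, mirroring the recall guarantee the abstract describes.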

Original language: English (US)
Pages (from-to): 2701-2709
Number of pages: 9
Journal: Advances in Neural Information Processing Systems
Volume: 2015-January
State: Published - Jan 1 2015
Event: 29th Annual Conference on Neural Information Processing Systems, NIPS 2015 - Montreal, Canada
Duration: Dec 7 2015 - Dec 12 2015
