Supervised learning in temporally-coded spiking neural networks with approximate backpropagation

Andrew Stephan, Brian Gardner, Steven J. Koester, André Grüning

Research output: Contribution to journal › Article › peer-review

Abstract

In this work we propose a new supervised learning method for temporally-encoded multilayer spiking networks to perform classification. The method employs a reinforcement signal that mimics backpropagation but is far less computationally intensive. Apart from this signal, the weight update calculation at each layer requires only local data. We also employ a rule capable of producing specific output spike trains: by setting the target spike time for key high-value neurons equal to the actual spike time with a slight negative offset, the actual spike time is driven as early as possible. In simulated MNIST handwritten digit classification, two-layer networks trained with this rule matched the performance of a comparable backpropagation-based non-spiking network.
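The target-setting rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the `offset` parameter, and the choice of a single correct-class ("high-value") neuron per example are assumptions made for clarity.

```python
import numpy as np

def target_spike_times(actual_times, label, offset=1.0):
    """Hypothetical sketch of the target-setting rule: each output
    neuron's target equals its actual spike time, except the
    high-value (correct-class) neuron, whose target is shifted
    slightly earlier by `offset`. Repeatedly chasing this moving
    target drives that neuron's spike as early as possible."""
    targets = actual_times.copy()          # targets = actual times by default
    targets[label] -= offset               # correct class: slightly earlier
    return targets

# Usage: three output neurons, class 1 is correct.
actual = np.array([5.0, 7.0, 6.0])
targets = target_spike_times(actual, label=1, offset=0.5)
# Only the correct-class neuron's target moves earlier (7.0 -> 6.5).
```

Because the targets of the other neurons coincide with their actual spike times, their error terms vanish and the update concentrates on advancing the correct-class spike.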

Original language: English (US)
Journal: Unknown Journal
State: Published - Jul 26 2020

Keywords

  • Backpropagation
  • Reinforcement
  • Spiking Neural Networks
  • Supervised Learning
  • Temporal Encoding

