Abstract
Neuro-inspired computing has made significant progress in recent years. However, its computational efficiency and hardware cost still lag behind the biological nervous system, especially during the training stage. This work aims to understand this gap from a neural motif perspective, particularly the feedforward inhibitory motif. Such a motif has been found in many cortical systems, where it plays a vital role in sparse learning. This work first establishes a neural network model that emulates the insect olfactory system, and then systematically studies the effects of the feedforward inhibitory motif. The performance and efficiency of the neural network models are evaluated on the handwritten digit recognition task, with and without the feedforward inhibitory motif. As demonstrated in the results, the feedforward inhibitory motif reduces the network size by more than 3X at the same 95% accuracy in handwritten digit recognition. Further simulation experiments reveal that the feedforward inhibition not only dynamically regulates the firing rate of excitatory neurons, promoting and stabilizing sparsity, but also provides a coarse categorization of the inputs, which improves the final accuracy with a smaller, cascaded structure. These results differentiate the feedforward inhibition pathway from the previous understanding of feedback inhibition, illustrating its functional importance for high computational and structural efficiency.
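To make the mechanism described above concrete, the sketch below shows one simple way a feedforward inhibitory unit can sparsify an excitatory layer: the inhibitory unit pools the same input that drives the excitatory neurons and subtracts a proportional amount, so only the most strongly driven neurons reach threshold. This is a minimal illustration, not the paper's model of the insect olfactory system; all layer sizes, weights, and constants (`N_IN`, `N_EXC`, `W_FFI`, `THRESH`, `TAU`) are assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes and constants (not taken from the paper)
N_IN, N_EXC = 100, 50
W_EXC = rng.uniform(0.0, 0.1, (N_EXC, N_IN))   # input -> excitatory weights
W_FFI = 4.0                                    # feedforward inhibition gain (assumed)
THRESH, TAU, DT = 0.35, 20.0, 1.0              # firing threshold, membrane tau (ms), step (ms)

def active_fraction(x, steps=200, use_ffi=True):
    """Fraction of excitatory leaky integrate-and-fire neurons that spike
    at least once in response to input pattern x."""
    drive = W_EXC @ x
    if use_ffi:
        # The inhibitory unit responds to total input activity, so stronger
        # (denser) inputs automatically recruit stronger inhibition.
        drive = drive - W_FFI * x.mean()
    v = np.zeros(N_EXC)
    fired = np.zeros(N_EXC, dtype=bool)
    for _ in range(steps):
        v += DT / TAU * (-v + drive)           # leaky integration toward the drive
        spikes = v >= THRESH
        fired |= spikes
        v[spikes] = 0.0                        # reset membrane potential after a spike
    return fired.mean()

x = (rng.random(N_IN) < 0.2).astype(float)     # a random, 20%-dense input pattern
print("active fraction with FFI:   ", active_fraction(x, use_ffi=True))
print("active fraction without FFI:", active_fraction(x, use_ffi=False))
```

Because the inhibition scales with pooled input activity rather than with the excitatory layer's own output, it acts before the excitatory neurons respond, which is the distinction the abstract draws between feedforward and feedback inhibition.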
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 141-151 |
| Number of pages | 11 |
| Journal | Neurocomputing |
| Volume | 267 |
| DOIs | |
| State | Published - Dec 6 2017 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2017 Elsevier B.V.
Keywords
- Feedforward inhibition
- Handwritten recognition
- Hebbian learning
- Neural motif
- Sparse learning
- Spiking neural network