Training Algorithms for Networks of Spiking Neurons

Date

2014-12-08

Abstract

Neural networks are a form of computing modeled on the way the brain performs computation. They are good at fitting non-linear functions and recognizing patterns. Biological neurons are believed to behave like spiking neurons, which process temporal information. In 2002, Bohte derived a backpropagation training algorithm (dubbed SpikeProp) for spiking neural networks (SNNs) in which temporal information is encoded as the firing time of the first spike. The SpikeProp algorithm and its variations have been the subject of many publications over the last decade.

The SpikeProp algorithm works for SNNs with continuous weights. Implementing continuous parameters in hardware is difficult, whereas implementing digital logic is more straightforward because many tools are available for it. Training an SNN with discrete weights is tricky because the smallest change allowed in a weight is a discrete step, and such a step can change the accuracy of the network by a large amount. Previous work exists on training Artificial Neural Networks (ANNs) with discrete weights, but there is no research on training SNNs with discrete weights. New algorithms proposed as part of this thesis work well for training discrete weights in a spiking neural network; they use the SpikeProp algorithm to choose which weights to update. Several standard classification datasets are used to demonstrate the efficacy of the proposed algorithms. One of the proposed algorithms (Multiple Weights Multiple Steps) is shown to take less execution time to train while achieving accuracy comparable to continuous-weight SNNs.
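The abstract does not spell out the Multiple Weights Multiple Steps procedure, so the following is only a hypothetical illustration of the general idea stated above: use the SpikeProp gradient to select which discrete weights to update, and move each selected weight by an integer number of steps. The names `discrete_weight_update`, `step_size`, and `n_select`, and the selection and rounding policy, are assumptions made for this sketch, not the thesis's algorithm.

```python
import numpy as np

def discrete_weight_update(weights, grad, step_size=0.05, n_select=3):
    """Move the n_select weights with the largest |gradient| by whole discrete steps.

    Assumes `weights` already lie on the grid of integer multiples of `step_size`,
    so the result stays on that grid. (Hypothetical sketch, not the thesis method.)
    """
    flat_grad = grad.ravel()
    new_weights = weights.ravel().copy()
    # Select the weights whose SpikeProp gradient has the largest magnitude.
    chosen = np.argsort(np.abs(flat_grad))[-n_select:]
    for idx in chosen:
        # Move at least one step, more if the gradient is large, opposite to its sign.
        n_steps = max(1, int(round(abs(flat_grad[idx]) / step_size)))
        new_weights[idx] -= np.sign(flat_grad[idx]) * n_steps * step_size
    return new_weights.reshape(weights.shape)
```

Keeping every update on the discrete grid, rather than quantizing continuous SpikeProp updates after training, avoids accumulating rounding error that a hardware implementation with fixed weight resolution could not represent.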
