Benchmarking Deep Legendre-SNN for Time Series Classification – Analysis and Enhancements
Abstract
Compute- and energy-efficient Time Series Classification (TSC) is essential to serve the continually growing sources and applications of temporal data. State-of-the-Art (SoTA) temporal computational models, e.g., LSTMs/RNNs, HIVE-COTE, and Transformers, perform well but are resource intensive, resulting in high energy consumption on CPUs/GPUs. In contrast, Reservoir Computing (RC) based models are resource-efficient and perform well on simple TSC datasets; when implemented with spiking neurons, spiking RC-based models offer the promise of high energy efficiency on neuromorphic hardware. In this work, we analyse, enhance, and benchmark the recently introduced spiking RC-based "Legendre Spiking Neural Network" (Legendre-SNN or LSNN) model for TSC. We theoretically investigate the Legendre Delay Network (LDN) that acts as the reservoir in the LSNN model and derive useful insights into the design of LDN-based models. In our analysis, we find that a higher-order LDN is necessary for optimal performance on input signals composed of higher frequencies. We also extend the existing LSNN model to multivariate time-series signals and propose the "DeepLSNN" model. We conduct experiments with DeepLSNN on 102 benchmark TSC datasets (comprising both univariate and multivariate signals). Through these large-scale experiments, we present the first benchmark results for spiking TSC. Considering DeepLSNN's best results, we find that it outperforms the non-spiking LSTM-FCN on more than 31% of the 102 datasets. We note that our benchmark results can serve as a comparison criterion for other spiking-TSC experiments.
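To make the LDN reservoir mentioned above concrete, the following is a minimal sketch of the standard LDN state-space construction from the Legendre Memory Unit literature (Voelker & Eliasmith), with a simple Euler integration loop. The function names, the chosen order, window length `theta`, and step size `dt` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def ldn_matrices(order, theta):
    """Build the Legendre Delay Network state-space matrices (A, B).

    order: number of Legendre basis states (the LDN order q)
    theta: length of the delay window, in seconds
    """
    q = np.arange(order)
    scale = (2 * q + 1)[:, None] / theta          # per-row (2i+1)/theta factor
    i, j = np.meshgrid(q, q, indexing="ij")
    # A[i, j] = (2i+1)/theta * (-1 if i < j else (-1)^(i-j+1))
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * scale
    # B[i] = (2i+1)/theta * (-1)^i
    B = ((-1.0) ** q)[:, None] * scale
    return A, B

def ldn_encode(u, dt=0.001, order=8, theta=0.5):
    """Encode a 1-D input signal into LDN reservoir states via Euler steps."""
    A, B = ldn_matrices(order, theta)
    x = np.zeros((order, 1))
    states = []
    for u_t in u:                                 # x' = A x + B u
        x = x + dt * (A @ x + B * u_t)
        states.append(x.ravel().copy())
    return np.stack(states)                       # shape: (len(u), order)
```

In an LSNN-style pipeline, the rows of the returned state matrix would be fed (after spiking neuron encoding) to a downstream classifier; increasing `order` lets the window represent higher-frequency content, consistent with the analysis summarized in the abstract.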