Browsing by Author "Liu, Shiya"
- Energy-efficient Neuromorphic Computing for Resource-constrained Internet of Things Devices
  Liu, Shiya (Virginia Tech, 2023-11-03)
  Due to the limited computation and storage resources of Internet of Things (IoT) devices, many emerging intelligent applications based on deep learning techniques depend heavily on cloud computing for computation and storage. However, cloud computing suffers from long latency, poor reliability, and weak privacy, creating a need for on-device computation and storage. On-device computation is also essential for many time-critical applications, which require real-time data processing and energy efficiency. Network bandwidth limitations and user expectations around data privacy and experience further escalate the demand for on-device processing. Among the technologies being explored to sustain advances in computing performance, neuromorphic computing has gained wide recognition as a means of achieving fast, energy-efficient machine intelligence on IoT devices. Programming neuromorphic hardware typically involves constructing a spiking neural network (SNN) that can be deployed onto the target hardware. This dissertation presents a set of methods for improving the accuracy and energy efficiency of SNNs, organized around four contributions. The first is quantization of neural networks through knowledge distillation: a quantization technique that reduces a model's computation and storage requirements while minimizing the loss of accuracy. To further reduce quantization errors, the second contribution is a novel quantization-aware training algorithm designed for quantized SNN models that execute on the Loihi chip, a specialized neuromorphic computing chip. Because SNNs generally achieve lower accuracy than deep neural networks (DNNs), the third contribution is a DNN-SNN co-learning algorithm that improves SNN models by leveraging knowledge from DNN models. Finally, since neural architecture plays a vital role in the accuracy and energy efficiency of an SNN, the fourth contribution is a novel neural architecture search algorithm tailored to SNNs on the Loihi chip; it selects an architecture based on the gradients the architecture induces at initialization across different data samples, without training any candidate. The methods are evaluated on (i) image classification, (ii) spectrum sensing, and (iii) modulation symbol detection. (Illustrative sketches of a distillation-based quantization step and a gradient-at-initialization architecture score follow this entry.)
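To make the first two contributions concrete, here is a minimal PyTorch sketch of quantization-aware training combined with knowledge distillation: weights are fake-quantized with a straight-through estimator, and the quantized student is trained against a full-precision teacher's softened outputs. The names `FakeQuantize` and `distillation_loss`, the 8-bit default, and the temperature/weighting values are illustrative assumptions, not the dissertation's exact formulation (which targets SNNs on Loihi).

```python
import torch
import torch.nn.functional as F

class FakeQuantize(torch.autograd.Function):
    """Uniform symmetric fake quantization with a straight-through estimator:
    weights are rounded to a low-bit grid in the forward pass, while the
    backward pass lets gradients flow through unchanged."""
    @staticmethod
    def forward(ctx, w, num_bits=8):  # num_bits is an assumed default
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: pass gradient to w

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with a softened KL term that transfers
    the (frozen) teacher's output distribution to the quantized student."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

# Usage sketch: fake-quantize a layer's weights inside the forward pass,
# then train the quantized student against a full-precision teacher:
#   w_q = FakeQuantize.apply(layer.weight)
#   loss = distillation_loss(student(x), teacher(x), y)
```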
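The fourth contribution's training-free search can be illustrated with a zero-cost proxy that scores a randomly initialized candidate by the gradients it induces on a few mini-batches. The summed-gradient-magnitude score below (in the spirit of SNIP-like saliency metrics) is only an assumption about the flavor of the criterion, not the dissertation's exact score; `gradient_score_at_init` is a hypothetical name.

```python
import torch.nn.functional as F

def gradient_score_at_init(model, batches):
    """Score an untrained candidate architecture by the gradients it induces
    across several data batches at initialization. Higher scores serve as a
    proxy for trainability, so no candidate is ever trained during search."""
    score = 0.0
    for x, y in batches:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        score += sum(p.grad.abs().sum().item()
                     for p in model.parameters() if p.grad is not None)
    return score

# Rank randomly initialized candidates and keep the best-scoring one:
#   best = max(candidates, key=lambda m: gradient_score_at_init(m, batches))
```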
- Quantized Reservoir Computing for Spectrum Sensing with Knowledge Distillation
  Liu, Shiya; Liu, Lingjia; Yi, Yang (IEEE, 2021-12-28)
  Quantization has been widely used to compress machine learning models for deployment on field-programmable gate arrays (FPGAs), but it often degrades a model's accuracy. In this work, we introduce a quantization approach that reduces a model's computation and storage resource consumption without losing much accuracy. Spectrum sensing identifies idle and busy bandwidths in cognitive radio, and the spectrum occupancy of each bandwidth is temporally correlated with previous and future time slots, so a recurrent neural network (RNN) is well suited to the task. Reservoir computing (RC) is a computation framework derived from the theory of RNNs; it is a better choice than an RNN for spectrum sensing on FPGA because it is easier to train and requires fewer computation resources. We apply our quantization approach to the RC to reduce its resource consumption on FPGA, and we propose a knowledge distillation scheme called teacher-student mutual learning to minimize the quantization errors of the quantized RC. Teacher-student mutual learning resolves the mismatched-capacity issue of conventional knowledge distillation and enables distillation on small datasets. On the spectrum sensing dataset, the quantized RC trained with teacher-student mutual learning achieves comparable accuracy while reducing the utilization of digital signal processing (DSP) blocks, flip-flops (FFs), and lookup tables (LUTs) by 53%, 40%, and 35%, respectively, compared to the RNN, and its inference is 2.4 times faster. Teacher-student mutual learning improves the accuracy of the quantized RC by 2.39%, outperforming conventional knowledge distillation. (A sketch of the mutual-learning update follows this entry.)
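As a rough illustration of teacher-student mutual learning, the sketch below trains two models simultaneously, each with a cross-entropy term plus a softened KL term toward the other's current predictions, which is what lets a lower-capacity quantized student and a higher-capacity teacher learn from each other. The function name, temperature `T`, and weighting `alpha` are assumptions, and pairing a full-precision teacher with the quantized RC as the student is my reading of the setup, not a detail stated in the abstract.

```python
import torch.nn.functional as F

def mutual_learning_step(teacher, student, opt_t, opt_s, x, y, T=2.0, alpha=0.5):
    """One step of teacher-student mutual learning: unlike one-way
    distillation, both models are updated together, each mimicking the
    other's softened predictions while fitting the hard labels."""
    logits_t, logits_s = teacher(x), student(x)

    def soft_kl(a, b):
        # KL from b's softened distribution to a's, with b held fixed.
        return F.kl_div(F.log_softmax(a / T, dim=1),
                        F.softmax(b.detach() / T, dim=1),
                        reduction="batchmean") * (T * T)

    loss_t = alpha * F.cross_entropy(logits_t, y) + (1 - alpha) * soft_kl(logits_t, logits_s)
    loss_s = alpha * F.cross_entropy(logits_s, y) + (1 - alpha) * soft_kl(logits_s, logits_t)

    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_t.item(), loss_s.item()
```

Because each KL term detaches the partner's logits, the two backward passes stay on independent graphs, so the teacher and student can use separate optimizers within a single step.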