Browsing by Author "Liu, Lingjia"
- 3D Massive MIMO and Artificial Intelligence for Next Generation Wireless Networks. Shafin, Rubayet (Virginia Tech, 2020-04-13). Three-dimensional (3D) massive multiple-input-multiple-output (MIMO)/full-dimensional (FD) MIMO and the application of artificial intelligence are two main driving forces for next generation wireless systems. This dissertation focuses on channel estimation and precoding for 3D massive MIMO systems and on the application of deep reinforcement learning (DRL) to MIMO broadcast beam synthesis. Specifically, downlink (DL) precoding and power allocation strategies are identified for a time-division-duplex (TDD) multi-cell multi-user massive FD-MIMO network. Utilizing channel reciprocity, DL channel state information (CSI) feedback is eliminated, and the DL multi-user MIMO precoding is linked to uplink (UL) direction-of-arrival (DoA) estimation through the estimation of signal parameters via rotational invariance technique (ESPRIT). Assuming non-orthogonal/non-ideal spreading sequences for the UL pilots, the performance of the UL DoA estimation is analytically characterized, and the characterized DoA estimation error is incorporated into the corresponding DL precoding and power allocation strategy. Simulation results verify the accuracy of our analytical characterization of the DoA estimation and demonstrate that the introduced multi-user MIMO precoding and power allocation strategy outperforms existing zero-forcing-based massive MIMO strategies. In 3D massive MIMO systems, especially in TDD mode, a base station (BS) relies on uplink sounding signals from mobile stations to obtain the spatial information for downlink MIMO processing. Accordingly, multi-dimensional parameter estimation of the MIMO channel becomes crucial for such systems to realize the predicted capacity gains. In this work, we also study the joint estimation of elevation and azimuth angles as well as delay parameters for 3D massive MIMO orthogonal frequency division multiplexing (OFDM) systems under parametric channel modeling. We introduce a matrix-based joint parameter estimation method and analytically characterize its performance for massive MIMO OFDM systems. Results show that the antenna array configuration at the BS plays a critical role in determining the underlying channel estimation performance, and the characterized MSEs match well with the simulated ones. Also, the joint parametric channel estimation outperforms MMSE-based channel estimation in terms of the correlation between the estimated channel and the true channel. Beamforming in MIMO systems is one of the key technologies for modern wireless communication. Creating wide common beams is essential for enhancing the coverage of a cellular network and for improving the broadcast operation for control signals. However, to maximize coverage, broadcast beam patterns need to adapt to the users' movement over time. In this dissertation, we present a MIMO broadcast beam optimization framework using deep reinforcement learning. Our proposed solution can autonomously and dynamically adapt the MIMO broadcast beam parameters based on the users' distribution in the network. Extensive simulation results show that the introduced algorithm achieves optimal coverage and converges to the oracle solution for both single-cell and multi-cell environments and for both periodic and Markov mobility patterns.
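As a companion to the ESPRIT-based DoA estimation described above, here is a minimal numpy sketch of the rotational-invariance idea for a single source impinging on a uniform linear array; the array size, half-wavelength spacing, snapshot count, and noise level are illustrative assumptions rather than the dissertation's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200                      # array elements, snapshots (illustrative)
theta = np.deg2rad(25.0)           # true direction of arrival
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # ULA steering vector
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
X = np.outer(a, s) + 0.1 * (rng.standard_normal((M, N))
                            + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N             # sample covariance
_, vecs = np.linalg.eigh(R)
Us = vecs[:, -1:]                  # signal subspace (one source)

# Rotational invariance: the two overlapping subarrays differ by a phase
# factor exp(j*pi*sin(theta)) at half-wavelength element spacing.
phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
est = np.arcsin(np.angle(np.linalg.eigvals(phi)[0]) / np.pi)
print(np.rad2deg(est))             # close to 25 degrees
```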
- Adaptive Beam Management for Secure mmWave Communication. Baron-Hyppolite, Adrian Louis (Virginia Tech, 2024-04-09). Millimeter wave systems leverage beamforming to generate narrow, high-powered beams that overcome the increased path loss in the millimeter wave spectrum. These beams are spatially confined, making millimeter wave links more resilient to eavesdropping and jamming attacks. However, millimeter wave radios locate each other and establish communication by exhaustively probing all possible angular directions, increasing their susceptibility to attacks. In this thesis, we showcase a secure beam management solution in which an adaptive beam management procedure avoids probing the directions of potential attackers. We employ a reinforcement learning agent to control the probing and dynamically restrict sweeps to a subset of beams in the millimeter wave transmitter codebook, avoiding the locations of potential attackers based on a proposed metric that quantifies beam sweeping secrecy over a pre-defined area. We evaluate our proposed system through numerical simulations and an experimental real-life implementation on the CCI xG Testbed.
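The thesis couples beam selection with a learning agent; the following hedged sketch casts that idea as a simple multi-armed bandit that learns to restrict the sweep to high-value beams while penalizing probes toward an attacker sector. The codebook size, user distribution, and secrecy penalty are invented for illustration and are not the thesis's metric.

```python
import numpy as np

rng = np.random.default_rng(0)
n_beams = 16
attacker_beams = {3, 4}            # directions to avoid (assumed known stochastically)
Q = np.zeros(n_beams)              # value of including each beam in the sweep
eps, alpha = 0.1, 0.05

def reward(b):
    # Illustrative reward: users cluster around beams 8-12; probing an
    # attacker direction leaks the sweep and is penalized.
    coverage = np.exp(-0.5 * ((b - 10) / 2.0) ** 2)
    leak = 1.0 if b in attacker_beams else 0.0
    return coverage - 2.0 * leak + 0.05 * rng.standard_normal()

for t in range(5000):
    b = rng.integers(n_beams) if rng.random() < eps else int(np.argmax(Q))
    Q[b] += alpha * (reward(b) - Q[b])   # incremental value update

sweep = np.argsort(Q)[-6:]         # restrict the sweep to the 6 best beams
print(sorted(sweep.tolist()))      # attacker beams should be excluded
```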
- Big Data Meet Cyber-Physical Systems: A Panoramic Survey. Atat, Rachad; Liu, Lingjia; Wu, Jinsong; Li, Guangyu; Ye, Chunxuan; Yi, Yang (IEEE, 2018). The world is witnessing an unprecedented growth of cyber-physical systems (CPS), which are foreseen to revolutionize our world by creating new services and applications in a variety of sectors, such as environmental monitoring, mobile-health systems, and intelligent transportation systems. The information and communication technology sector is experiencing significant growth in data traffic, driven by the widespread usage of smartphones, tablets, and video streaming, along with the large-scale sensor deployments anticipated in the near future, which are expected to dramatically increase the growth rate of raw sensed data. In this paper, we present a CPS taxonomy via a broad overview of data collection, storage, access, processing, and analysis. Compared with other survey papers, this is the first panoramic survey on big data for CPS, where our objective is to provide a panoramic summary of the different CPS aspects. Furthermore, CPS require cybersecurity to protect them against malicious attacks and unauthorized intrusion, which becomes a challenge given the enormous amount of data continuously generated in the network. Thus, we also provide an overview of the different security solutions proposed for CPS big data storage, access, and analytics, and we discuss how big data meets green challenges in the context of CPS.
- A Comprehensive Analysis of Deep Learning for Interference Suppression, Sample and Model Complexity in Wireless Systems. Oyedare, Taiwo Remilekun (Virginia Tech, 2024-03-12). The wireless spectrum is limited, and demand for its use is increasing due to technological advancements in wireless communication, resulting in persistent interference issues. Despite progress in addressing interference, it remains a challenge for effective spectrum usage, particularly in license-free and managed shared bands and other opportunistic spectrum access solutions. Therefore, efficient and interference-resistant spectrum usage schemes are critical. In the past, most interference solutions have relied on avoidance techniques and expert-system-based mitigation approaches. Recently, researchers have utilized artificial intelligence/machine learning techniques at the physical (PHY) layer, particularly deep learning, to suppress or compensate for the interfering signal rather than simply avoiding it. Deep learning has also been used in recent years to address various difficult problems in wireless communications, such as transmitter classification, interference classification, and modulation recognition. To this end, this dissertation presents a thorough analysis of deep learning techniques for interference classification and suppression and examines the complexity (sample and model) issues that arise from using deep learning. First, we address the knowledge gap in the literature with respect to the state of the art in deep learning-based interference suppression. To account for the limitations of deep learning-based interference suppression techniques, we discuss several challenges, including lack of interpretability, the stochastic nature of the wireless channel, issues with open set recognition (OSR), and challenges with implementation. We also provide a technical discussion of the prominent deep learning algorithms proposed in the literature and offer guidelines for their successful implementation. Next, we investigate convolutional neural network (CNN) architectures for interference and transmitter classification tasks. In particular, we utilize a CNN architecture to classify interference, investigate the model complexity of CNN architectures for classifying homogeneous and heterogeneous devices, and examine the impact of model complexity on test accuracy. Next, we explore issues of sample size and sample quality in the training data for deep learning, and we propose a rule of thumb for transmitter classification using CNNs based on the findings of our sample complexity study. Finally, in cases where interference cannot be avoided, it must be suppressed. To achieve this, we build upon autoencoder work from other fields to design a CNN-based autoencoder model that suppresses interference, thereby ensuring the coexistence of different wireless technologies in both licensed and unlicensed bands.
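A minimal PyTorch sketch of the kind of CNN-based autoencoder described above for interference suppression: it is trained to map an interference-corrupted I/Q waveform back to the clean signal. The layer sizes, window length, and synthetic training pair are illustrative assumptions, not the dissertation's architecture.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(2, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 2, 9, stride=2, padding=4, output_padding=1),
        )

    def forward(self, x):              # x: (batch, 2, samples) for I/Q rails
        return self.dec(self.enc(x))

model = DenoisingAE()
clean = torch.randn(8, 2, 256)                     # stand-in desired signal
corrupted = clean + 0.5 * torch.randn_like(clean)  # stand-in interference
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                 # a few illustrative steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(corrupted), clean)
    loss.backward()
    opt.step()
```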
- Deep Recurrent Q Networks for Dynamic Spectrum Access in Dynamic Heterogeneous Environments with Partial Observations. Xu, Yue (Virginia Tech, 2022-09-23). Dynamic Spectrum Access (DSA) has strong potential to address the need for improved spectrum efficiency. Unfortunately, traditional DSA approaches such as simple "sense-and-avoid" fail to provide sufficient performance in many scenarios. The combination of sensing with deep reinforcement learning (DRL) has therefore been shown to be a promising alternative to these simplistic approaches. DRL does not require the explicit estimation of transition probability matrices or the prohibitively large matrix computations of traditional reinforcement learning methods. Further, since many learning approaches cannot solve the resulting online Partially Observable Markov Decision Process (POMDP), Deep Recurrent Q-Networks (DRQNs) have been proposed to determine the optimal channel access policy via online learning. The fundamental goal of this dissertation is to develop DRL-based solutions to this POMDP-DSA problem. We mainly consider three aspects: (1) optimal transmission strategies, (2) combined intelligent sensing and transmission strategies, and (3) learning efficiency, i.e., online convergence speed. Four key challenges arise: (1) the proposed DRQN-based node does not know the other nodes' behavior patterns a priori and must predict the future channel state based on previous observations; (2) the impact on primary user throughput during and even after learning must be limited; (3) resources can be wasted on sensing/observation; and (4) convergence speed must be improved without sacrificing performance. We demonstrate in this dissertation that the proposed DRQN can learn: (1) the optimal transmission strategy in a variety of environments under partial observations; (2) a sensing strategy that provides near-optimal throughput in different environments while dramatically reducing the required sensing resources; (3) robustness to imperfect observations; (4) a sufficiently flexible approach that can accommodate dynamic environments, multi-channel transmission, and the presence of multiple agents; and (5) all of the above in an accelerated fashion, using one of three different approaches.
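A minimal PyTorch sketch of a DRQN for channel access: an LSTM summarizes the partial observation history into a recurrent belief state, and a linear head produces per-channel Q-values. The channel count and hidden size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, n_channels=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.q_head = nn.Linear(hidden, n_channels)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, n_channels) of sensed/partial observations
        h, state = self.lstm(obs_seq, state)
        return self.q_head(h), state   # per-step Q-values, recurrent state

net = DRQN()
obs = torch.zeros(1, 10, 16)           # ten sensing steps, all observed idle
q, state = net(obs)
action = q[0, -1].argmax().item()      # channel to access in the next slot
```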
- Deep Reinforcement Learning for Next Generation Wireless Networks with Echo State Networks. Chang, Hao-Hsuan (Virginia Tech, 2021-08-26). This dissertation considers a deep reinforcement learning (DRL) setting under the practical challenges of real-world wireless communication systems. Non-stationary and partially observable wireless environments make the learning and convergence of a DRL agent challenging. One way to facilitate learning in partially observable environments is to combine a recurrent neural network (RNN) with DRL to capture temporal information inherent in the system, an approach referred to as the deep recurrent Q-network (DRQN). However, training a DRQN is known to be challenging, requiring a large amount of training data to achieve convergence. In many targeted wireless applications in 5G and future 6G networks, the available training data is very limited. Therefore, it is important to develop DRL strategies that can capture the temporal correlation of the dynamic environment while requiring only limited training overhead. In this dissertation, we design efficient DRL frameworks utilizing the echo state network (ESN), a special type of RNN in which only the output weights are trained. Specifically, we first introduce the deep echo state Q-network (DEQN), which adopts ESNs as the kernel of deep Q-networks. Next, we introduce a federated ESN-based policy gradient (Fed-EPG) approach that enables multiple agents to collaboratively learn a shared policy toward the system goal. We design computationally efficient training algorithms by exploiting the special structure of ESNs, which can learn a good policy in a short time with limited training data. Theoretical analyses of the DEQN and Fed-EPG approaches establish their convergence properties and provide a guide to hyperparameter tuning. Furthermore, we evaluate performance under the dynamic spectrum sharing (DSS) scenario, a key enabling technology for utilizing precious spectrum resources more efficiently. Compared to a conventional spectrum management policy that grants a fixed spectrum band to a single system for exclusive access, DSS allows a secondary system to dynamically share the spectrum with the primary system. Our work sheds light on real deployments of DRL techniques in next generation wireless systems.
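The key ESN property exploited by DEQN, that only the readout is trained, can be sketched in a few lines of numpy: a fixed random reservoir (spectral radius below one) embeds the observation history, and a ridge-regression readout maps reservoir states to Q-values. The dimensions and the stand-in regression targets are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, n_act = 8, 200, 4
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(inputs):
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)             # fixed, untrained recurrence
        states.append(x.copy())
    return np.array(states)

obs = rng.normal(size=(500, n_in))                # observation sequence
targets = rng.normal(size=(500, n_act))           # stand-in Q-learning targets
S = run_reservoir(obs)
# Ridge-regression readout: the only trained weights in the network.
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ targets)
q_values = S @ W_out
```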
- Design of Secure Scalable Frameworks for Next Generation Cellular Networks. Atalay, Tolga Omer (Virginia Tech, 2024-06-06). Leveraging Network Functions Virtualization (NFV), Fifth Generation (5G) core and Radio Access Network (RAN) functions are implemented as Virtual Network Functions (VNFs) on commercial off-the-shelf (COTS) hardware. The use of virtualized micro-services to implement these 5G VNFs enables the flexible and scalable construction of end-to-end, logically isolated network fragments denoted as network slices. The goal of this dissertation is to design more scalable, flexible, secure, and visible 5G networks; each chapter presents a design and evaluation that addresses one or more of these aspects. The first objective is to understand the limits of 5G core micro-service virtualization when using lightweight containers to construct various network slicing models with different service guarantees. The initial deployment model consists of the OpenAirInterface (OAI) 5G core in a containerized setting, creating a universally deployable testbed. Operational and computational stress tests are performed on individual 5G core VNFs, and network slicing models applicable to real-life scenarios are created. The analysis captures the increase in compute resource consumption of individual VNFs during various core network procedures. Furthermore, across the different network slicing models, a progressive increase in resource consumption is observed as the service guarantees of the slices become more demanding. The framework created with this testbed is the first to provide such analytics on lightweight virtualized 5G core VNFs with large-scale end-to-end connections. Moving into the cloud-native ecosystem, 5G core deployments will be orchestrated by middle-man Network-Slice-as-a-Service (NSaaS) providers. These NSaaS providers will consume Infrastructure-as-a-Service (IaaS) offerings and offer network slices to Mobile Virtual Network Operators (MVNOs). To investigate this future model, end-to-end emulated 5G deployments offer insight into the cost implications of such NSaaS offerings in the cloud. The deployment features real-life traffic patterns corresponding to practical use cases, matched with specific network slicing models. These models are implemented in a 5G testbed to gather compute resource consumption metrics, and the obtained data are used to formulate infrastructure procurement costs for popular cloud providers such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. The results show steady patterns in compute consumption across multiple use cases, which are used to make high-scale cost projections for public cloud deployments. In the end, a trade-off between cost and throughput is achieved by decentralizing the network slices and offloading the user plane. The next step is the demystification of 5G traffic patterns using an over-the-air (OTA) testbed. An open-source OTA testbed is constructed leveraging advanced features of the 5G radio access and core networks developed by OAI. The achievable Quality of Service (QoS) is evaluated to provide visibility into the compute consumption of individual components. Additionally, a method is presented for utilizing WiFi devices to experiment with 5G QoS. Resource consumption analytics are collected from the 5G user plane in correlation with raw traffic patterns.
The results show that the open-source 5G testbed can sustain sub-20 ms latency with up to 80 Mbps throughput over a 25 m range using COTS devices. Device connections remain stable while supporting different use cases such as AR/VR, online gaming, video streaming, and Voice-over-IP (VoIP), and the study illustrates how these popular use cases affect CPU utilization in the user plane. This provides insight into the capabilities of existing 5G solutions by demystifying the resource needs of specific use cases. The move into public cloud-based deployments creates a growing demand for general-purpose compute resources as 5G deployments continue to expand. Given their existing infrastructures, cloud providers such as AWS are attractive platforms to address this need. Therefore, it is crucial to understand the control and user plane QoS implications of deploying the 5G core on top of AWS. To this end, a 5G testbed is constructed using open-source components spanning multiple global locations within the AWS infrastructure. Using different core deployment strategies that shuffle VNFs into AWS edge zones, an operational breakdown of the latency overhead is conducted for 5G procedures. The results show that moving specific VNFs into edge regions reduces the latency overhead for key 5G operations. Multiple user plane connections are instantiated between availability zones and edge regions with different traffic loads; as more data sessions are instantiated, the deterioration of connection quality is observed to vary with traffic load. Ultimately, the findings provide new insights for MVNOs in determining favorable placements of their 5G core entities in the cloud. The transition into cloud-native deployments has encouraged the development of supportive platforms for 5G. One such framework is the OpenRAN initiative, led by the O-RAN Alliance, which promotes an open Radio Access Network (RAN) and offers operators fine-grained control over the radio stack. To that end, O-RAN introduces new components to the 5G ecosystem, such as the near-real-time RAN Intelligent Controller (near-RT RIC) and the accompanying extensible applications (xApps). The introduction of these entities expands the 5G threat surface, and with the movement from proprietary hardware to virtual environments enabled by NFV, attack vectors that exploit the existing NFV attack surface pose additional threats. To deal with these threats, the xApp repository function (XRF) framework is constructed for scalable authentication, authorization, and discovery of xApps. To harden the XRF microservices, deployments are isolated using Intel Software Guard Extensions (SGX). The XRF modules are individually benchmarked to compare how the different microservices behave in terms of computational overhead when deployed in virtual and hardware-based isolation sandboxes. The evaluation shows that the XRF framework scales efficiently in a multi-threaded Kubernetes environment, and that isolating the XRF microservices introduces different amounts of processing overhead depending on the sandboxing strategy. A security analysis shows how the XRF framework addresses key issues identified in the O-RAN and 5G standardization efforts. The final chapter of the dissertation focuses on the development and evaluation of 5G-STREAM, a service mesh tailored for rapid, efficient, and authorized microservice communication in cloud-based 5G core networks.
5G-STREAM addresses critical scalability and efficiency challenges in the 5G core control plane by optimizing traffic and reducing signaling congestion across distributed cloud environments. The framework enhances the topology awareness of VNF service chains, enabling dynamic configuration of communication pathways that significantly reduces discovery and authorization signaling overhead. A prototype of 5G-STREAM was developed and tested, showing a reduction of up to 2× in inter-VNF latency per HTTP transaction in the core network service chains, particularly benefiting larger service chains with extensive messaging. Additionally, 5G-STREAM's VNF placement strategies are explored to further optimize performance and cost efficiency in cloud-based infrastructures, ultimately providing a scalable solution that can adapt to increasing network demands while maintaining robust service levels. This approach advances the management of 5G core networks, paving the way for more dynamic, efficient, and cost-effective cellular infrastructures. Overall, this dissertation is devoted to designing, building, and evaluating scalable and secure 5G deployments.
- Differential Privacy Meets Federated Learning under Communication Constraints. Mohammadi, Nima; Bai, Jianan; Fan, Qiang; Song, Yifei; Yi, Yang; Liu, Lingjia (IEEE, 2021). The performance of federated learning systems is bottlenecked by communication costs and training variance. The communication overhead problem is usually addressed by three communication-reduction techniques, namely model compression, partial device participation, and periodic aggregation, at the cost of increased training variance. Different from traditional distributed learning systems, federated learning suffers from data heterogeneity (since devices sample their data from possibly different distributions), which induces additional variance among devices during training. Various variance-reduced training algorithms have been introduced to combat the effects of data heterogeneity, but they usually cost additional communication resources to deliver the necessary control information. Additionally, data privacy remains a critical issue in FL, and there have thus been attempts to bring differential privacy to this framework as a mediator between utility and privacy requirements. This paper investigates the trade-offs between communication costs and training variance under a resource-constrained federated system, theoretically and experimentally, and studies how communication-reduction techniques interplay in a differentially private setting. The results provide important insights into designing practical privacy-aware federated learning systems.
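A hedged numpy sketch of one communication-constrained FL round combining the three techniques named above: partial device participation, periodic aggregation (several local steps between uploads), and model compression via top-k sparsification. All sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, part_frac, local_steps, k = 1000, 50, 0.2, 5, 100
w = np.zeros(d)                                 # global model

def local_sgd(w, steps):
    # Stand-in for SGD on a client's (possibly heterogeneous) local data.
    return w - 0.01 * rng.normal(loc=0.1, size=(steps, d)).sum(axis=0)

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]   # keep k largest-magnitude coords
    out[idx] = v[idx]
    return out

chosen = rng.choice(n_clients, int(part_frac * n_clients), replace=False)
deltas = [top_k(local_sgd(w, local_steps) - w, k) for _ in chosen]
w = w + np.mean(deltas, axis=0)                 # server-side aggregation
```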
- Energy Efficient Deep Spiking Recurrent Neural Networks: A Reservoir Computing-Based Approach. Hamedani, Kian (Virginia Tech, 2020-06-18). Recurrent neural networks (RNNs) have been widely used for supervised pattern recognition and for exploring underlying spatio-temporal correlations. However, due to the vanishing/exploding gradient problem, training a fully connected RNN is in many cases very difficult or even impossible. The difficulty of training traditional RNNs led us to reservoir computing (RC), which has recently attracted considerable attention due to its simple training methods and fixed weights in the recurrent layer. There are three categories of RC systems: echo state networks (ESNs), liquid state machines (LSMs), and delayed feedback reservoirs (DFRs). In this dissertation, a novel RNN structure inspired by dynamic delayed feedback loops is introduced. In the reservoir (recurrent) layer of a DFR, only one neuron is required, which makes DFRs extremely suitable for hardware implementation. The main goal of this dissertation is to introduce an energy-efficient, easy-to-train RNN that achieves high performance on different tasks compared to the state of the art. To improve the energy efficiency of our model, we adopt spiking neurons as the information processing units of the DFR. Spiking neural networks (SNNs) are the most biologically plausible and energy-efficient class of artificial neural networks (ANNs). Traditional analog ANNs bear only marginal similarity to brain-like information processing; biological neurons communicate through spikes, and artificial SNNs were introduced to mimic them. Hardware implementations of SNNs, moreover, have been shown to be extremely energy efficient. Toward this overarching goal, this dissertation presents a spiking DFR (SDFR) with novel encoding schemes and defense mechanisms against adversarial attacks. To verify its effectiveness and performance, the SDFR is adopted in three applications with significant spatio-temporal correlations: attack detection in smart grids, spectrum sensing in multiple-input-multiple-output (MIMO)-orthogonal frequency division multiplexing (OFDM) dynamic spectrum sharing (DSS) systems, and video-based face recognition. The performance of the SDFR is first verified in cyber-attack detection in smart grids. Smart grids are a new generation of power grids that guarantee more reliable and efficient transmission and delivery of power to customers. More reliable and efficient power generation and distribution can be realized through the integration of internet, telecommunication, and energy technologies. The convergence of different technologies brings opportunities, but challenges are also inevitable; one major challenge that poses a threat to smart grids is cyber-attacks. A novel method is developed to detect false data injection (FDI) attacks in smart grids. The second application of the SDFR is spectrum sensing in MIMO-OFDM DSS systems. DSS is being implemented in the fifth generation of wireless communication systems (5G) to improve spectrum efficiency. In a MIMO-OFDM system, not all subcarriers are utilized simultaneously by the primary user (PU); it is therefore essential to sense the idle frequency bands and assign them to secondary users (SUs).
The effectiveness of the SDFR in capturing the spatio-temporal correlation of MIMO-OFDM time series and predicting the availability of frequency bands in future time slots is studied as well. In the third application, the SDFR is modified for video-based face recognition, where it is leveraged to recognize the identities of different subjects as they rotate their heads through different angles. Another contribution of this dissertation is a novel encoding scheme for spiking neurons inspired by cognitive studies of rats. For the first time, the multiplexing of multiple neural codes is introduced, and it is shown that the robustness and resilience of the spiking neurons are increased against noisy data and adversarial attacks, respectively. Adversarial attacks are small and imperceptible perturbations of the input data that have been shown to fool deep learning (DL) models. Many adversarial attack and defense mechanisms have been introduced for DL models, and compromising the security and reliability of artificial intelligence (AI) systems is a major concern of government, industry, and cyber-security researchers, since insufficient protection can compromise the security and privacy of everyone in society. Finally, a defense mechanism to protect spiking neurons against adversarial attacks is introduced for the first time. In a nutshell, this dissertation presents a novel energy-efficient deep spiking recurrent neural network inspired by delayed dynamic loops; its effectiveness is verified in several applications, and novel encoding and defense mechanisms are introduced that improve the robustness of the model against noise and adversarial attacks.
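The single-neuron reservoir idea behind the DFR can be sketched compactly: one nonlinear node plus a delay line of "virtual" nodes, so the recurrence lives in time rather than in a large weight matrix, and only a linear readout over the virtual-node states would be trained. The node count, input mask, gain, and nonlinearity below are illustrative choices, not the dissertation's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
N_virtual, gain, eta = 50, 0.8, 0.5
mask = rng.choice([-0.1, 0.1], size=N_virtual)     # fixed random input mask
delay_line = np.zeros(N_virtual)

def dfr_step(u):
    # Time-multiplex one input sample across the virtual nodes of a single
    # nonlinear neuron with delayed feedback.
    global delay_line
    states = np.empty(N_virtual)
    for i in range(N_virtual):
        fed_back = delay_line[i]                   # value from one delay ago
        states[i] = np.tanh(gain * fed_back + eta * mask[i] * u)
    delay_line = states
    return states

series = np.sin(0.1 * np.arange(300))              # stand-in temporal input
states = np.array([dfr_step(u) for u in series])   # rows feed a trained readout
```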
- Energy-efficient Neuromorphic Computing for Resource-constrained Internet of Things Devices. Liu, Shiya (Virginia Tech, 2023-11-03). Due to the limited computation and storage resources of Internet of Things (IoT) devices, many emerging intelligent applications based on deep learning heavily depend on cloud computing for computation and storage. However, cloud computing suffers from long latency, poor reliability, and weak privacy, resulting in a need for on-device computation and storage. On-device computation is also essential for time-critical applications, which require real-time, energy-efficient data processing. Furthermore, escalating requirements for on-device processing are driven by network bandwidth limitations and consumer expectations concerning data privacy and user experience. Within computing research there is growing interest in novel technologies that can sustain ongoing advancements in performance, and among the various prospective avenues, neuromorphic computing has garnered significant recognition as a crucial means of achieving fast and energy-efficient machine intelligence for IoT devices. Programming neuromorphic computing hardware typically involves constructing a spiking neural network (SNN) that can be deployed onto the designated neuromorphic hardware. This dissertation presents a range of methodologies aimed at enhancing the accuracy and energy efficiency of SNNs, achieved by incorporating four essential methods. The first is the quantization of neural networks through knowledge distillation: a quantization technique that effectively reduces the computational and storage resource requirements of a model while minimizing the loss of accuracy. To further reduce quantization errors, the second method introduces a novel quantization-aware training algorithm specifically designed for quantized SNN models intended for execution on the Loihi chip, a specialized neuromorphic computing chip. SNNs generally exhibit lower accuracy than deep neural networks (DNNs), so the third method introduces a DNN-SNN co-learning algorithm that enhances the performance of SNN models by leveraging knowledge obtained from DNN models. Since neural architecture design plays a vital role in the accuracy and energy efficiency of an SNN model, the fourth method presents a novel neural architecture search algorithm tailored to SNNs on the Loihi chip; it selects an optimal architecture based on gradients induced by the architecture at initialization across different data samples, without needing to train candidate architectures. To demonstrate effectiveness and performance across diverse machine intelligence applications, our methods are evaluated on (i) image classification, (ii) spectrum sensing, and (iii) modulation symbol detection.
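A minimal PyTorch sketch of the first method, quantizing a network under a knowledge-distillation loss: a full-precision teacher guides a weight-quantized student through softened logits, with a straight-through fake-quant estimator. The bit width, temperature, and toy networks are illustrative assumptions, not the Loihi-specific algorithm.

```python
import torch
import torch.nn as nn

def fake_quant(w, bits=4):
    # Uniform symmetric quantization with a straight-through gradient.
    scale = w.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return w + (q * scale - w).detach()

class QuantLinear(nn.Linear):
    def forward(self, x):
        return nn.functional.linear(x, fake_quant(self.weight), self.bias)

teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(QuantLinear(20, 64), nn.ReLU(), QuantLinear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0                                            # distillation temperature
x = torch.randn(32, 20)
for _ in range(5):                                 # a few illustrative steps
    with torch.no_grad():
        soft_targets = (teacher(x) / T).softmax(dim=-1)
    loss = nn.functional.kl_div(
        (student(x) / T).log_softmax(dim=-1), soft_targets,
        reduction="batchmean",
    ) * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```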
- IEEE Access Special Section Editorial: Recent Advances in Full-Duplex Radios and Networks. Huang, Chuan; Liu, Lingjia; Xia, Bin; Joung, Jingon; Ho, Chin Keong (IEEE, 2018).
- An Investigation of Methods to Improve Area and Performance of Hardware Implementations of a Lattice Based Cryptosystem. Beckwith, Luke Parkhurst (Virginia Tech, 2020-11-05). With continuing research into quantum computing, current public-key cryptographic algorithms such as RSA and ECC will become insecure. These algorithms rely on the difficulty of integer factorization or discrete logarithm problems, which are hard to solve on classical computers but become easy with quantum computers. Because of this threat, government and industry are investigating new public-key standards based on mathematical assumptions that remain secure under quantum computing. This paper investigates methods of improving the area and performance of one of the proposed algorithms for key exchange, "NewHope." We describe a pipelined FPGA implementation of NewHope512cpa that dramatically increases throughput for a similar design area. Our pipelined encryption implementation achieves 652.2 Mbps and a 0.088 Mbps/LUT throughput-to-area (TPA) ratio, which are the best known results to date, and achieves an energy efficiency of 0.94 nJ/bit. This represents TPA and energy-efficiency improvements of 10.05× and 8.58×, respectively, over a non-pipelined approach. Additionally, we investigate replacing the large SHAKE XOF (hash) function with a lightweight Trivium-based PRNG, which reduces area by 32% and improves energy efficiency by 30% for the pipelined encryption implementation and could be considered for future cipher specifications.
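For context on the Trivium-based PRNG mentioned above, here is a bit-level Python model of the Trivium keystream generator following the public eSTREAM specification; it illustrates the cipher's three shift registers and sparse feedback taps, not the paper's FPGA implementation.

```python
def trivium_keystream(key_bits, iv_bits, n_bits):
    # key_bits and iv_bits are lists of 80 bits (0/1) each.
    s = (key_bits + [0] * 13          # register A: 93 bits
         + iv_bits + [0] * 4          # register B: 84 bits
         + [0] * 108 + [1, 1, 1])     # register C: 111 bits
    out = []
    for i in range(4 * 288 + n_bits): # 4 full warm-up cycles, then output
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)  # keystream bit
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift each register, feeding the cross-coupled update bits.
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out

ks = trivium_keystream([0] * 80, [0] * 80, 64)   # 64 keystream bits
```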
- Joint Security and QoS Provisioning in Train-Centric CBTC Systems Under Sybil Attacks. Wang, Xiaoxuan; Liu, Lingjia; Zhu, Li; Tang, Tao (IEEE, 2019). Security and Quality-of-Service (QoS) provisioning are two critical themes in urban rail communication-based train control (CBTC) data communication systems, as both directly affect safe train operation. In this paper, we design a novel train-centric CBTC system using train-to-train (T2T) wireless communication with an innovative security check scheme. Local security certification and cooperative security checks are proposed to detect and defend against Sybil attacks on CBTC T2T communications. A quantized Age of Information (AoI) is used as an integrated QoS and security indicator for the train-centric CBTC data communication system. The proposed AoI indicator fully accounts for the impact of packet drops and re-transmissions, Sybil attacks, and the cooperative security check on CBTC systems. Policy-based asynchronous reinforcement learning is utilized to improve the integrated AoI performance. Simulation results show that the proposed cooperative security check scheme with the optimization model achieves better integrated AoI performance than the traditional security check scheme. Moreover, with the help of the cooperative security check scheme, Sybil attacks against train-centric CBTC systems are detected and defended against with much higher probability.
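A toy simulation in the spirit of the quantized AoI indicator above: age grows each slot and resets on a successful delivery, while drops and attack-disrupted slots force retransmission and let the age keep growing. The drop/attack probabilities and the quantization rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop, p_attack, T = 0.1, 0.05, 200
age, trace = 0, []
for t in range(T):
    # A delivery succeeds only if the packet is neither dropped nor hit by
    # a (Sybil-style) disruption in this slot; otherwise retransmit next slot.
    delivered = rng.random() > p_drop and rng.random() > p_attack
    age = 0 if delivered else age + 1
    trace.append(min(age // 2, 7))       # quantize AoI to 8 levels
print(np.mean(trace))                    # average quantized age
```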
- Machine Learning-Based Receiver in Multiple Input Multiple Output Communications Systems. Zhou, Zhou (Virginia Tech, 2021-08-10). Bridging machine learning technologies to multiple-input-multiple-output (MIMO) communications systems is a primary driving force for next-generation wireless systems. This dissertation introduces a variety of neural network structures for symbol detection/equalization in MIMO systems configured with two different waveforms: orthogonal frequency-division multiplexing (OFDM), the major air interface in current cellular systems, and orthogonal time frequency and space (OTFS), developed to handle high mobility. For the sake of real-time processing, the introduced neural network structures incorporate inductive biases of wireless communications signals and operate in an online training manner. The utilized inductive priors include the shift-invariance of quadrature amplitude modulation, the time-frequency relation inherent in OFDM signals, the multi-mode feature of massive antenna arrays, and the delay-Doppler representation of doubly selective channels. In addition, the neural network structures are rooted in reservoir computing, an efficient neural network computational framework with good generalization performance on limited training datasets. The resulting neural network structures can therefore learn beyond observation and offer good transmission reliability in the low signal-to-noise ratio (SNR) regime. This dissertation includes comprehensive simulation results justifying the effectiveness of the introduced NN architectures compared with conventional model-based approaches and alternative neural network structures.
- MIMO-OFDM Symbol Detection via Echo State Networks. Zhou, Zhou (Virginia Tech, 2019-10-30). The echo state network (ESN) is a neural network structure composed of a high-dimensional nonlinear dynamical system and learned readout weights. This thesis considers applying ESNs to symbol detection in multiple-input, multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) systems. A new ESN structure, the windowed echo state network (WESN), is introduced to further improve symbol detection performance. Theoretical analysis shows that the WESN has enhanced short-term memory (STM) compared with the standard ESN and can therefore offer better computational ability. Additionally, for ESN/WESN-based symbol detection, the bandwidth spent on the training set is the same as that of the demodulation reference signals defined in 3GPP LTE/LTE-Advanced systems, and a unified training framework is developed for both comb and scattered pilot patterns. Complexity analysis demonstrates the advantages of the ESN/WESN-based symbol detector over conventional symbol detectors such as linear minimum mean square error (LMMSE) and the sphere decoder when the system employs a large number of OFDM sub-carriers. Numerical evaluations show that the ESN/WESN improves symbol detection performance over conventional methods in both the low-SNR regime and the power amplifier (PA) nonlinearity regime. Finally, the WESN is shown to produce better symbol detection results than the ESN.
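The "windowed" part of the WESN can be sketched independently of the reservoir: each time step feeds the current and several previous received samples, lengthening the effective short-term memory available to the readout. The window length below is an illustrative assumption, and in practice the complex samples would be split into real/imaginary rails before entering an input matrix such as the one in the ESN sketch earlier on this page.

```python
import numpy as np

def windowed_inputs(rx, L=4):
    # Stack the current and L-1 previous received samples at each time step.
    pad = np.zeros(L - 1, dtype=rx.dtype)
    buf = np.concatenate([pad, rx])
    return np.stack([buf[t:t + L] for t in range(len(rx))])   # shape (T, L)

rng = np.random.default_rng(0)
rx = rng.standard_normal(64) + 1j * rng.standard_normal(64)   # received samples
U = windowed_inputs(rx)   # each row drives the reservoir input at one step
```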
- Modeling, Analysis, and Real-Time Design of Many-Antenna MIMO Networks. Chen, Yongce (Virginia Tech, 2021-09-14). Among the many advances and innovations in wireless technologies over the past twenty years, MIMO is perhaps among the most successful. MIMO technology has been evolving over the past two decades, and today the number of antennas equipped at a base station (BS) or an access point (AP) keeps increasing, forming what we call "many-antenna" MIMO systems. Many-antenna MIMO will have significant impacts on modern wireless communications, as it allows numerous wireless applications to operate on the vastly underexplored mid-band and high-band spectrum and can deliver ultra-high throughput. Although there are considerable efforts on many-antenna MIMO systems, most come from physical (PHY) layer information-theoretic exploitation; there is a lack of investigation of many-antenna MIMO from a networking perspective. At the same time, new knowledge and understanding are emerging at the PHY layer, such as the rank-deficient channel phenomenon. This calls for new theories and models for many-antenna MIMO in a networking environment. In addition, the problem space for many-antenna MIMO systems is much broader and more challenging than for conventional MIMO, and reusing existing solutions designed for conventional MIMO may suffer from inferior performance or require excessive computation time. The goal of this dissertation is to advance many-antenna MIMO techniques for networking research. We focus on two critical areas in the context of many-antenna MIMO networks: (i) degrees-of-freedom (DoF) based modeling and (ii) real-time optimization; the dissertation consists of two parts studying these areas. In the first part, we develop new DoF models and theories under general channel rank conditions for many-antenna MIMO networks and explore efficient DoF allocation based on our new DoF model. The main contributions of this part are as follows. New DoF models and theories under general channel rank conditions: Existing DoF-based models in the networking community assume that the channel matrix is of full rank. However, this assumption no longer holds when the number of antennas becomes large and the propagation environment is not ideal. In this study, we develop a novel DoF model under general channel rank conditions. In particular, we find that for interference cancellation (IC), shared DoF consumption at both transmit and receive nodes is most efficient for DoF allocation, contrary to existing unilateral IC models based on the full-rank channel assumption. Further, we show that existing DoF models under the full-rank assumption are a special case of our generalized DoF model. The findings of this study pave the way for future research on many-antenna networks under general channel rank conditions. Efficient DoF utilization for MIMO networks: We observe that, in addition to the channel not being full rank, the strength of signals along different directions in the eigenspace is extremely uneven. This offers new opportunities to utilize DoFs efficiently in a MIMO network. In this study, we introduce a novel concept called the "effective rank threshold": DoFs are consumed only to cancel strong interference in the eigenspace, while weak interference is treated as noise in throughput calculation.
To better understand the benefits of this approach, we study a fundamental trade-off between network throughput and the effective rank threshold for an MU-MIMO network. Our simulation results show that network throughput under the optimal rank threshold is significantly higher than under existing DoF IC models. In the second part, we offer real-time designs and implementations to solve many-antenna MIMO problems for 5G cellular systems. In addition to maximizing a specific optimization objective, we aim to offer solutions that can be implemented in sub-millisecond time to meet the requirements of 5G standards. The main contributions of this part are as follows. Turbo-HB, a novel design and implementation for ultra-fast hybrid beamforming: We investigate the beamforming problem under the hybrid beamforming (HB) architecture. A major practical challenge for HB is to obtain a solution in 500 µs, an extremely stringent but necessary time requirement for deployment in the field. To address this challenge, we present Turbo-HB, a novel beamforming design under the HB architecture that can obtain the beamforming matrices in about 500 µs. The key ideas of Turbo-HB are two-fold. First, we develop a low-complexity SVD by exploiting the randomized SVD technique and leveraging channel sparsity at mmWave frequencies. Second, we accelerate the overall computation through large-scale parallel computation on a commercial off-the-shelf (COTS) GPU platform, with special engineering efforts for matrix operations and minimized memory access. Experimental results show that Turbo-HB is able to obtain the beamforming matrices in 500 µs for an MU-MIMO cellular system while achieving throughput similar to or better than state-of-the-art algorithms. mCore+, a sub-millisecond scheduler for 5G MU-MIMO systems: We study a scheduling problem in a 5G NR environment. In 5G NR, an MU-MIMO scheduler needs to allocate resource blocks (RBs) and assign a modulation and coding scheme (MCS) for each user at each transmission time interval (TTI); in particular, multiple users may be co-scheduled on the same RB under MU-MIMO, and the real-time requirement for determining a scheduling solution is at most 1 ms. In this study, we present a novel scheduler, mCore+, that meets the sub-millisecond real-time requirement. mCore+ is designed through multi-phase optimization, leveraging large-scale parallelism: in each phase, mCore+ either decomposes the optimization problem into a large number of independent sub-problems, or reduces the search space into a smaller but more promising subspace, or both. We implement mCore+ on a COTS GPU platform. Experimental results show that mCore+ can obtain a scheduling solution in about 500 µs while achieving better throughput performance than state-of-the-art algorithms. M3, a sub-millisecond scheduler for multi-cell MIMO networks under the C-RAN architecture: We investigate a scheduling problem in a multi-cell environment. Under the Cloud Radio Access Network (C-RAN) architecture, signal processing can be performed cooperatively for multiple cells at a centralized baseband unit (BBU) pool. However, a new resource scheduler is needed to jointly determine RB allocation, MCS assignment, and beamforming matrices for all users across multiple cells, and we aim to find a scheduling solution within each TTI (i.e., at most 1 ms) to conform to the frame structure defined by 5G NR. To do this, we propose M3, a GPU-based real-time scheduler for multi-cell MIMO systems.
M3 is developed through a novel multi-pipeline design that exploits large-scale parallelism: one pipeline performs a sequence of operations for cell-edge users to explore joint transmission while, in parallel, the other pipeline serves cell-center users to explore MU-MIMO transmission. For validation, we implement M3 on a COTS GPU and show that M3 can find a scheduling solution within 1 ms for all tested cases while significantly increasing user throughput by leveraging joint transmission among neighboring cells.
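The low-complexity SVD at the heart of Turbo-HB builds on the standard randomized SVD recipe, which the numpy sketch below illustrates: project the channel matrix onto a small random subspace, orthonormalize, and run an exact SVD on the reduced matrix. The matrix sizes, target rank, and oversampling are illustrative; Turbo-HB additionally exploits mmWave channel sparsity and GPU parallelism.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(64, 16)) @ rng.normal(size=(16, 32))  # low-rank-ish channel
k, p = 8, 4                                                # target rank, oversampling

Omega = rng.normal(size=(H.shape[1], k + p))   # random test matrix
Q, _ = np.linalg.qr(H @ Omega)                 # orthonormal basis of the sketch
Ub, s, Vt = np.linalg.svd(Q.T @ H, full_matrices=False)    # small exact SVD
U = Q @ Ub                                     # approximate left singular vectors

approx = (U[:, :k] * s[:k]) @ Vt[:k]           # rank-k channel approximation
print(np.linalg.norm(approx - H) / np.linalg.norm(H))      # small relative error
```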
- Online Machine Learning for Wireless Communications: Channel Estimation, Receive Processing, and Resource Allocation. Li, Lianjun (Virginia Tech, 2023-07-03). Machine learning (ML) has shown success in many areas, such as computer vision, natural language processing, robot control, and gaming, and it also draws significant attention in the wireless communication community. However, applying ML schemes to wireless communication networks is not straightforward; several challenges need to be addressed: (1) training data in communication networks, especially in the physical and MAC layers, are extremely limited; (2) the highly dynamic wireless environment and fast-changing transmission schemes in communication networks make offline training impractical; and (3) ML tools are treated as black boxes, which lack explainability. This dissertation addresses those challenges by selecting training-efficient neural networks, devising online training frameworks for wireless communication scenarios, and incorporating communication domain knowledge into the algorithm design. Training-efficient ML algorithms are customized for three communication applications: (1) symbol detection, where real-time online learning-based symbol detection algorithms are designed for MIMO-OFDM and massive MIMO-OFDM systems utilizing reservoir computing, the extreme learning machine, multi-mode reservoir computing, and StructNet; (2) channel estimation, where a residual learning-based offline method is introduced for WiFi-OFDM systems and a StructNet-based online method is devised for MIMO-OFDM systems; and (3) radio resource management, where reinforcement learning-based schemes are designed for dynamic spectrum access as well as O-RAN intelligent network slicing management. All algorithms introduced in this dissertation have demonstrated outstanding performance in their application scenarios, paving the path toward adopting ML-based solutions in practical wireless networks.
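Among the training-efficient models listed above, the extreme learning machine admits a particularly compact sketch: a fixed random hidden layer with a closed-form ridge-regression readout, so "training" is a single linear solve suited to online, per-frame adaptation. The feature/symbol dimensions and ridge factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # e.g., received pilot features
Y = rng.normal(size=(200, 4))           # e.g., transmitted pilot symbols
W_h = rng.normal(size=(16, 128))        # fixed random hidden weights (untrained)

H = np.tanh(X @ W_h)                    # random nonlinear feature expansion
# Closed-form ridge-regression readout: the only "trained" parameters.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(128), H.T @ Y)
Y_hat = np.tanh(X @ W_h) @ beta         # detected symbols on new features
```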
- Powering Next-Generation Artificial Intelligence by Designing Three-dimensional High-Performance Neuromorphic Computing System with Memristors. An, Hongyu (Virginia Tech, 2020-09-17). Human brains can complete numerous intelligent tasks, such as pattern recognition, reasoning, control, and movement, with remarkable energy efficiency (20 W). In contrast, a typical computer recognizes only 1,000 different objects while consuming about 250 W of power [1]. These significant performance differences stem from the intrinsically different structures of human brains and digital computers. The latest discoveries in neuroscience attribute the capabilities of human brains to three unique features: (1) the neural network structure; (2) spike-based signal representation; and (3) synaptic plasticity and associative memory learning [1, 2]. In this dissertation, a next-generation platform for artificial intelligence is explored by utilizing memristors to design a three-dimensional high-performance neuromorphic computing system. Low-variation memristors (fabricated at Virginia Tech), achieved by adding heat-dissipation layers, significantly improve the learning accuracy of the system. Moreover, three emerging neuromorphic architectures are proposed, showing a path toward a next-generation artificial intelligence platform with self-learning capability and high energy efficiency. Finally, an associative memory learning system is demonstrated that remembers and correlates two concurrent events (the pronunciation and shape of digits).
- Primary User Obfuscation in an Incumbent Informed Spectrum Access System. Makin, Cameron (Virginia Tech, 2021-06-24). With growing demand for spectrum availability, spectrum sharing has become a high-profile solution to overcrowding. To enable spectrum sharing between incumbent/primary and secondary users, incumbents must have spectrum protection and privacy from malicious new entrants. In this Spectrum Access System (SAS) advancement, Primary Users (PUs) are obfuscated through the efforts of the SAS and the cooperation of obedient new entrants. Further, the necessary changes to the SAS to support this privacy scheme are laid out, suggesting improvements in PU privacy, Citizens Broadband Radio Service Device (CBSD)-SAS relations, and punishment for unauthorized transmission. Results show the feasibility of PU obfuscation against malicious spectrum-sensing users. Simulation results indicate that the obfuscation scheme can deliver location and frequency-occupation privacy with 75% and 66% effectiveness, respectively, under a 100%-efficient spectrum-utilization-oriented obfuscation scheme; a scheme without the spectrum utilization constraint shows up to 91% location-privacy effectiveness. Experimental trials indicate that the privacy tactic can be implemented on an open-source SAS; however, environmental factors may degrade its performance.
- Privacy-aware Federated Learning with Global Differential Privacy. Airody Suresh, Spoorthi (Virginia Tech, 2023-01-31). There is an increasing need for low-power neural systems as neural networks become more widely used in embedded devices with limited resources. Spiking neural networks (SNNs) are proving to be a more energy-efficient alternative to conventional artificial neural networks (ANNs), which are recognized as computationally heavy. Despite this significance, not enough attention has been paid to training SNNs with large-scale distributed machine learning techniques like Federated Learning (FL). Since federated learning involves many energy-constrained devices, there is a significant opportunity to take advantage of the energy efficiency offered by SNNs. However, the real-world communication constraints in an FL system must be addressed, which is done here with the help of three communication-reduction techniques, namely model compression, partial device participation, and periodic aggregation. Furthermore, the convergence of federated learning systems is also affected by data heterogeneity. Federated learning systems are capable of protecting the private data of clients from adversaries, but confidential information can still be revealed by analyzing the uploaded client parameters. To combat privacy attacks on FL systems, various attempts have been made to incorporate differential privacy within the framework. In this thesis, we investigate the trade-offs between communication costs and training variance in a federated learning system with differential privacy applied at the parameter server (the curator model).
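A minimal numpy sketch of the curator (global) model referenced above: the trusted parameter server clips each client update to bound sensitivity, averages, and adds Gaussian noise once, centrally. The clip norm and noise multiplier are illustrative, not the thesis's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)
d, clip, noise_mult = 1000, 1.0, 1.1
client_updates = [rng.normal(scale=0.1, size=d) for _ in range(20)]

# Clip each update so a single client's contribution is bounded in L2 norm.
clipped = [u * min(1.0, clip / np.linalg.norm(u)) for u in client_updates]
avg = np.mean(clipped, axis=0)
# Gaussian mechanism at the server: noise scales with the per-average
# sensitivity clip/n and the chosen noise multiplier.
private_avg = avg + rng.normal(0, noise_mult * clip / len(clipped), size=d)
```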