Browsing by Author "Dhillon, Harpreet Singh"
Now showing 1 - 20 of 73
- 3D Massive MIMO and Artificial Intelligence for Next Generation Wireless Networks
  Shafin, Rubayet (Virginia Tech, 2020-04-13)
  3-dimensional (3D) massive multiple-input-multiple-output (MIMO)/full dimensional (FD) MIMO and the application of artificial intelligence are two main driving forces for next generation wireless systems. This dissertation focuses on aspects of channel estimation and precoding for 3D massive MIMO systems and the application of deep reinforcement learning (DRL) for MIMO broadcast beam synthesis. To be specific, downlink (DL) precoding and power allocation strategies are identified for a time-division-duplex (TDD) multi-cell multi-user massive FD-MIMO network. Utilizing channel reciprocity, DL channel state information (CSI) feedback is eliminated and the DL multi-user MIMO precoding is linked to the uplink (UL) direction of arrival (DoA) estimation through the estimation of signal parameters via rotational invariance technique (ESPRIT). Assuming non-orthogonal/non-ideal spreading sequences of the UL pilots, the performance of the UL DoA estimation is analytically characterized, and the characterized DoA estimation error is incorporated into the corresponding DL precoding and power allocation strategy. Simulation results verify the accuracy of our analytical characterization of the DoA estimation and demonstrate that the introduced multi-user MIMO precoding and power allocation strategy outperforms existing zero-forcing based massive MIMO strategies. In 3D massive MIMO systems, especially in TDD mode, a base station (BS) relies on the uplink sounding signals from mobile stations to obtain the spatial information for downlink MIMO processing. Accordingly, multi-dimensional parameter estimation of the MIMO channel becomes crucial for such systems to realize the predicted capacity gains. In this work, we also study the joint estimation of elevation and azimuth angles as well as the delay parameters for 3D massive MIMO orthogonal frequency division multiplexing (OFDM) systems under parametric channel modeling. We introduce a matrix-based joint parameter estimation method and analytically characterize its performance for massive MIMO OFDM systems. Results show that the antenna array configuration at the BS plays a critical role in determining the underlying channel estimation performance, and the characterized MSEs match well with the simulated ones. Also, the joint parametric channel estimation outperforms MMSE-based channel estimation in terms of the correlation between the estimated channel and the real channel. Beamforming in MIMO systems is one of the key technologies for modern wireless communication. Creating wide common beams is essential for enhancing the coverage of cellular networks and for improving the broadcast operation for control signals. However, in order to maximize the coverage, patterns for broadcast beams need to be adapted based on the users' movement over time. In this dissertation, we present a MIMO broadcast beam optimization framework using deep reinforcement learning. Our proposed solution can autonomously and dynamically adapt the MIMO broadcast beam parameters based on the users' distribution in the network. Extensive simulation results show that the introduced algorithm can achieve the optimal coverage and converge to the oracle solution for both single-cell and multi-cell environments and for both periodic and Markov mobility patterns.
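The UL DoA estimation above is carried out with ESPRIT. As a rough, self-contained illustration of the rotational-invariance idea (not the dissertation's multi-cell FD-MIMO formulation), the NumPy sketch below estimates two arrival angles from uniform-linear-array snapshots; the array size, spacing, SNR, and angle values are illustrative assumptions.

```python
# Minimal 1-D ESPRIT sketch for a uniform linear array (ULA).
# Illustrative only: half-wavelength spacing, two far-field sources.
import numpy as np

rng = np.random.default_rng(0)
M, T = 8, 200                        # antennas, snapshots
true_deg = np.array([-20.0, 35.0])   # assumed source angles
d = 0.5                              # element spacing in wavelengths

# Steering matrix A (M x K) and noisy snapshots X (M x T)
k = 2 * np.pi * d * np.sin(np.deg2rad(true_deg))
A = np.exp(1j * np.outer(np.arange(M), k))
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ S + N

# Signal subspace from the sample covariance (eigh sorts ascending)
R = X @ X.conj().T / T
eigval, eigvec = np.linalg.eigh(R)
Es = eigvec[:, -2:]                  # 2 dominant eigenvectors

# Rotational invariance between the two overlapping subarrays
Psi = np.linalg.pinv(Es[:-1]) @ Es[1:]
phases = np.angle(np.linalg.eigvals(Psi))
est_deg = np.rad2deg(np.arcsin(phases / (2 * np.pi * d)))
print("estimated DoAs (deg):", np.sort(est_deg))   # ~[-20, 35]
```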
- Action Recognition with Knowledge Transfer
  Choi, Jin-Woo (Virginia Tech, 2021-01-07)
  Recent progress on deep neural networks has shown remarkable action recognition performance from videos. The remarkable performance is often achieved by transfer learning: training a model on a large-scale labeled dataset (source) and then fine-tuning the model on small-scale labeled datasets (targets). However, existing action recognition models do not always generalize well on new tasks or datasets for two reasons. i) Current action recognition datasets have a spurious correlation between action types and background scene types. The models trained on these datasets are biased towards the scene instead of focusing on the actual action. This scene bias leads to poor generalization performance. ii) Directly testing the model trained on the source data on the target data leads to poor performance because the source and target distributions are different. Fine-tuning the model on the target data can mitigate this issue. However, manually labeling small-scale target videos is labor-intensive. In this dissertation, I propose solutions to these two problems. For the first problem, I propose to learn scene-invariant action representations to mitigate the scene bias in action recognition models. Specifically, I augment the standard cross-entropy loss for action classification with 1) an adversarial loss for the scene types and 2) a human mask confusion loss for videos where the human actors are invisible. These two losses encourage learning representations unsuitable for predicting 1) the correct scene types and 2) the correct action types when there is no evidence. I validate the efficacy of the proposed method through transfer learning experiments. I transfer the pre-trained model to three different tasks, including action classification, temporal action localization, and spatio-temporal action detection. The results show consistent improvement over the baselines for every task and dataset. I formulate human action recognition as an unsupervised domain adaptation (UDA) problem to handle the second problem. In the UDA setting, we have many labeled videos as source data and unlabeled videos as target data. We can use already existing labeled video datasets as source data in this setting. The task is to align the source and target feature distributions so that the learned model can generalize well on the target data. I propose 1) aligning the more important temporal part of each video and 2) encouraging the model to focus on action, not the background scene, to learn domain-invariant action representations. The proposed method is simple and intuitive while achieving state-of-the-art performance without training on a lot of labeled target videos. I then relax the unsupervised target data setting to a sparsely labeled target data setting and explore semi-supervised video action recognition, where we have a lot of labeled videos as source data and sparsely labeled videos as target data. The semi-supervised setting is practical, as sometimes we can afford a little bit of cost for labeling target data. I propose multiple video data augmentation methods to inject photometric, geometric, temporal, and scene invariances into the action recognition model in this setting. The resulting method shows favorable performance on the public benchmarks.
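The loss combination described above lends itself to a compact sketch. The PyTorch snippet below shows one plausible way to assemble the three terms; the confusion terms are implemented here as cross-entropy against a uniform target, and the weights, heads, and tensor shapes are illustrative assumptions rather than the dissertation's exact formulation (which uses an adversarial scene classifier).

```python
# Sketch of a debiased action-recognition loss: standard action
# cross-entropy, plus a scene-confusion term (push the scene head
# toward a uniform prediction), plus a human-masked confusion term
# (the action head should be uncertain when actors are masked out).
# Weights and heads are illustrative assumptions.
import torch
import torch.nn.functional as F

def debiased_loss(action_logits, scene_logits, masked_action_logits,
                  action_labels, w_scene=0.5, w_mask=0.5):
    # 1) Supervised action classification.
    ce = F.cross_entropy(action_logits, action_labels)

    # 2) Scene-confusion term: cross-entropy against the uniform
    #    distribution, i.e. discourage confident scene predictions.
    scene_conf = -F.log_softmax(scene_logits, dim=1).mean()

    # 3) Human-mask confusion: with actors masked out, no action
    #    should be predictable, so again push toward uniform.
    mask_conf = -F.log_softmax(masked_action_logits, dim=1).mean()

    return ce + w_scene * scene_conf + w_mask * mask_conf

# Toy usage with random tensors standing in for network outputs.
B, num_actions, num_scenes = 4, 10, 6
loss = debiased_loss(torch.randn(B, num_actions, requires_grad=True),
                     torch.randn(B, num_scenes, requires_grad=True),
                     torch.randn(B, num_actions, requires_grad=True),
                     torch.randint(0, num_actions, (B,)))
loss.backward()
```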
- Advances in Stochastic Geometry for Cellular Networks
  Saha, Chiranjib (Virginia Tech, 2020-08-24)
  The mathematical modeling and performance analysis of cellular networks have seen a major paradigm shift with the application of stochastic geometry. The main purpose of stochastic geometry is to endow probability distributions on the locations of the base stations (BSs) and users in a network, which, in turn, provides an analytical handle on the performance evaluation of cellular networks. To preserve the tractability of analysis, the common practice is to assume complete spatial randomness of the network topology. In other words, the locations of users and BSs are modeled as independent homogeneous Poisson point processes (PPPs). Despite their usefulness, the PPP-based network models fail to capture any spatial coupling between the users and BSs, which is dominant in a multi-tier cellular network (also known as a heterogeneous cellular network (HetNet)) consisting of macro and small cells. For instance, the users tend to form hotspots or clusters at certain locations, and the small cell BSs (SBSs) are deployed at higher densities at the locations of these hotspots in order to cater to the high data demand. Such user-centric deployments naturally couple the locations of the users and SBSs. On the other hand, these spatial couplings are at the heart of the spatial models used in industry for system-level simulations and standardization purposes. This dissertation proposes fundamentally new spatial models based on stochastic geometry which closely emulate these spatial couplings and are conducive to a more realistic and fine-tuned performance analysis, optimization, and design of cellular networks. First, this dissertation proposes a new class of spatial models for HetNets where the locations of the BSs and users are assumed to be distributed as Poisson cluster processes (PCPs). From the modeling perspective, the proposed models can capture different spatial couplings in a network topology, such as the user hotspots and the user-BS coupling occurring due to the user-centric deployment of the SBSs. The PCP-based model is a generalization of the state-of-the-art PPP-based HetNet model, because the model reduces to the PPP-based model once all spatial couplings in the network are ignored. From the stochastic geometry perspective, we have made contributions in deriving the fundamental distribution properties of the PCP, such as the distance distributions and sum-product functionals, which are instrumental for the performance characterization of HetNets in terms of metrics such as coverage and rate. The focus on more refined spatial models for small cells and users brings us to the second direction of the dissertation, which is the modeling and analysis of HetNets with millimeter wave (mm-wave) integrated access and backhaul (IAB), an emerging design concept for fifth generation (5G) cellular networks. While the concepts of network densification with small cells emerged in the fourth generation (4G) era, small cells can be realistically deployed with IAB, since it solves the problem of high-capacity wired backhaul of SBSs by replacing the last-mile fibers with mm-wave links. We have proposed new stochastic geometry-based models for the performance analysis of IAB-enabled HetNets. Our analysis reveals some interesting system-design insights: (1) the IAB HetNets can support a maximum number of users beyond which the data rate drops below the rate of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no rate gain is observed from adding more SBSs. The third and final direction of this dissertation is the combination of machine learning and stochastic geometry to construct a new class of data-driven network models which can be used in the performance optimization and design of a network. As a concrete example, we investigate the classical problem of wireless link scheduling, where the objective is to choose an optimal subset of simultaneously active transmitters (Tx-s) from a ground set of Tx-s so as to maximize the network-wide sum rate. Since the optimization problem is NP-hard, we replace the computationally expensive heuristic by inferring the point patterns of the active Tx-s in the optimal subset after training a determinantal point process (DPP). Our investigations demonstrate that the DPP is able to learn the spatial interactions of the Tx-s in the optimal subset and gives a reasonably accurate estimate of the optimal subset for any new ground set of Tx-s.
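A Poisson cluster process of the kind proposed above is straightforward to sample: draw cluster centers from a PPP, then scatter a Poisson number of offspring around each center. The sketch below samples a Thomas cluster process (Gaussian scattering) in a square window; all densities and the scattering spread are illustrative assumptions.

```python
# Sample a Thomas cluster process (one flavor of Poisson cluster
# process): PPP parents, Poisson-many offspring per parent with
# isotropic Gaussian displacements. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
side = 10.0          # square window [0, side]^2
lam_parent = 0.3     # parent (cluster-center) density
mean_offspring = 5   # mean points per cluster
sigma = 0.4          # Gaussian scattering std. dev.

n_parents = rng.poisson(lam_parent * side**2)
parents = rng.uniform(0, side, size=(n_parents, 2))

points = []
for p in parents:
    n_off = rng.poisson(mean_offspring)
    points.append(p + sigma * rng.standard_normal((n_off, 2)))
points = np.vstack(points) if points else np.empty((0, 2))

print(f"{n_parents} clusters, {len(points)} points")
# Keeping lam_parent * mean_offspring fixed and letting sigma grow
# large washes out the clustering, mirroring the PCP-to-PPP
# reduction mentioned above.
```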
- Age of Information: Fundamentals, Distributions, and Applications
  Abd-Elmagid, Mohamed Abd-Elaziz (Virginia Tech, 2023-07-11)
  A typical model for real-time status update systems consists of a transmitter node that generates real-time status updates about some physical process(es) of interest and sends them through a communication network to a destination node. Such a model can be used to analyze the performance of a plethora of emerging Internet of Things (IoT)-enabled real-time applications, including healthcare, factory automation, autonomous vehicles, and smart homes, to name a few. The performance of these applications depends highly upon the freshness of the information status at the destination node about its monitored physical process(es). Because of that, the main design objective of such real-time status update systems is to ensure timely delivery of status updates from the transmitter node to the destination node. To measure the freshness of information at the destination node, the Age of Information (AoI) has been introduced as a performance metric that accounts for the generation time of each status update (which was ignored by conventional performance metrics, specifically throughput and delay). Since then, there have been two main research directions in the AoI research area. The first direction aimed to analyze/characterize AoI in different queueing-theoretic models/disciplines, and the second direction focused on the optimization of AoI in different communication systems that deal with time-sensitive information. However, the prior queueing-theoretic analyses of AoI have mostly been limited to the characterization of the average AoI, and the prior studies developing AoI/age-aware scheduling/transmission policies have mostly ignored the energy constraints at the transmitter node(s). Motivated by these limitations, this dissertation develops new queueing-theoretic methods that allow the characterization of the distribution of AoI in several classes of status updating systems, as well as novel AoI-aware scheduling policies that account for the energy constraints at the transmitter nodes (for several settings of communication networks) in the process of decision-making, using tools from optimization theory and reinforcement learning. The first part of this dissertation develops a stochastic hybrid system (SHS)-based general framework to facilitate characterizing the distribution of AoI in several classes of real-time status updating systems. First, we study a general setting of status updating systems, where a set of source nodes provide status updates about some physical process(es) to a set of monitors. For this setting, the continuous state of the system is formed by the AoI/age processes at different monitors, the discrete state of the system is modeled using a finite-state continuous-time Markov chain, and the coupled evolution of the continuous and discrete states of the system is described by a piecewise linear SHS with linear reset maps. Using the notion of tensors, we derive a system of linear equations for the characterization of the joint moment generating function (MGF) of an arbitrary set of age processes in the network. Afterwards, we study a general setting of gossip networks in which a source node forwards its measurements (in the form of status updates) about some observed physical process to a set of monitoring nodes according to independent Poisson processes. Furthermore, each monitoring node sends status updates about its information status (about the process observed by the source) to the other monitoring nodes according to independent Poisson processes. For this setup, we develop SHS-based methods that allow the characterization of higher-order marginal/joint moments of the age processes in the network. Finally, our SHS-based framework is applied to derive the stationary marginal and joint MGFs for several queueing disciplines and gossip network topologies, using which we derive closed-form expressions for marginal/joint higher-order statistics of the age processes, such as the variance of each age process and the correlation coefficients between all possible pairwise combinations of age processes. In the second part of this dissertation, our analysis focuses on understanding the distributional properties of AoI in status updating systems powered by energy harvesting (EH). In particular, we consider a multi-source status updating system in which an EH-powered transmitter node has multiple sources generating status updates about several physical processes. The status updates are then sent to a destination node where the freshness of each status update is measured in terms of AoI. The status updates of each source and the harvested energy packets are assumed to arrive at the transmitter according to independent Poisson processes, and the service time of each status update is assumed to be exponentially distributed. For this setup, we derive closed-form expressions for the MGF of AoI under several queueing disciplines at the transmitter, including non-preemptive and source-agnostic/source-aware preemptive-in-service strategies. The generality of our analysis is demonstrated by recovering several existing results as special cases. A key insight from our characterization of the distributional properties of AoI is that it is crucial to incorporate the higher moments of AoI in the implementation/optimization of status updating systems, rather than just relying on its average (as has mostly been done in the existing literature on AoI). In the third and final part of this dissertation, we employ AoI as a performance metric for several settings of communication networks and develop novel AoI-aware scheduling policies using tools from optimization theory and reinforcement learning. First, we investigate the role of an unmanned aerial vehicle (UAV) as a mobile relay to minimize the average peak AoI for a source-destination pair. For this setup, we formulate an optimization problem to jointly optimize the UAV's flight trajectory as well as the energy and service time allocations for packet transmissions. This optimization problem is subject to the UAV's mobility constraints and the total available energy constraints at the source node and the UAV. In order to solve this non-convex problem, we propose an efficient iterative algorithm and establish its convergence analytically. A key insight obtained from our results is that the optimal design of the UAV's flight trajectory achieves significant performance gains, especially when the available energy at the source node and UAV is limited and/or when the size of the update packet is large. Afterwards, we study a generic system setup for an IoT network in which radio frequency (RF)-powered IoT devices sense different physical processes and need to transmit their sensed data to a destination node. For this generic system setup, we develop a novel reinforcement learning-based framework that characterizes the optimal sampling policy for IoT devices, with the objective of minimizing the long-term weighted sum of average AoI values in the network. Our analytical results characterize the structural properties of the age-optimal policy and demonstrate that it has a threshold-based structure with respect to the AoI values of the different processes. They further demonstrate that the structures of the age-optimal and throughput-optimal policies are different. Finally, we analytically characterize the structural properties of the AoI-optimal joint sampling and updating policy for wireless powered communication networks while accounting for the costs of generating status updates in the process of decision-making. Our results demonstrate that the AoI-optimal joint sampling and updating policy has a threshold-based structure with respect to different system state variables.
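For readers new to the metric itself, the sketch below simulates the AoI sawtooth of a single-source M/M/1 first-come-first-served updating queue and estimates both the time-average age and its variance (the kind of higher-order statistic emphasized above) by exact piecewise integration between departures; the arrival and service rates are illustrative.

```python
# Estimate the time-average AoI (and its variance over time) for an
# M/M/1 FCFS status-updating queue by integrating the sawtooth age
# process exactly between departures. Rates are illustrative.
import numpy as np

rng = np.random.default_rng(2)
lam, mu, n = 0.5, 1.0, 100_000      # arrival rate, service rate, #updates

t_gen = np.cumsum(rng.exponential(1 / lam, n))   # generation times
serv = rng.exponential(1 / mu, n)
dep = np.empty(n)                                # departure times
prev = 0.0
for i in range(n):
    prev = max(prev, t_gen[i]) + serv[i]         # FCFS recursion
    dep[i] = prev

# Age just after the i-th departure is dep[i] - t_gen[i]; it then
# grows at unit rate until the next departure.
age_after = dep - t_gen
dt = np.diff(dep)
a = age_after[:-1]
T = dep[-1] - dep[0]
mean_age = np.sum(a * dt + dt**2 / 2) / T
mean_sq = np.sum(a**2 * dt + a * dt**2 + dt**3 / 3) / T
print(f"time-average AoI ~ {mean_age:.3f}, variance ~ {mean_sq - mean_age**2:.3f}")
# Known M/M/1 FCFS mean: (1/mu)*(1 + 1/rho + rho^2/(1-rho)) with
# rho = lam/mu, i.e. ~3.5/mu at rho = 0.5, which the estimate matches.
```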
- Ambient Backscatter Communication Systems: Design, Signal Detection and Bit Error Rate Analysis
  Devineni, Jaya Kartheek (Virginia Tech, 2021-09-21)
  The success of the Internet-of-Things (IoT) paradigm relies on, among other things, developing energy-efficient communication techniques that can enable information exchange among billions of battery-operated IoT devices. With its technological capability of simultaneous information and energy transfer, ambient backscatter is quickly emerging as an appealing solution for this communication paradigm, especially for links with low data rate requirements. However, many challenges and limitations of ambient backscatter have to be overcome for widespread adoption of the technology in future wireless networks. Motivated by this, we study the design and implementation of ambient backscatter systems, including non-coherent detection and encoding schemes, and investigate techniques such as multiple-antenna interference cancellation and frequency-shift backscatter to improve the bit error rate performance of the designed systems. First, the problem of coherent and semi-coherent ambient backscatter is investigated by evaluating the exact bit error rate (BER) of the system. The test statistic used for signal detection is based on averaging the energy of the received signal samples. It is important to highlight that the conditional distributions of this test statistic are derived using the central limit theorem (CLT) approximation in the literature. The characterization of the exact conditional distributions of the test statistic as non-central chi-squared random variables for the binary hypothesis testing problem is first handled in our study, which is a key contribution of this particular work. The evaluation of the maximum likelihood (ML) detection threshold is also explored, and is found to be intractable. To overcome this, alternate strategies to approximate the ML threshold are proposed. In addition, several insights for system design and implementation are provided from both analytical and numerical standpoints. Second, the highly appealing non-coherent signal detection is explored in the context of ambient backscatter for a time-selective channel. Modeling the time-selective fading as a first-order autoregressive (AR) process, we implement a new detection architecture at the receiver based on direct averaging of the received signal samples, which departs significantly from the energy averaging-based receivers considered in the literature. For the proposed setup, we characterize the exact asymptotic BER for both single-antenna (SA) and multi-antenna (MA) receivers, and demonstrate the robustness of the new architecture to timing errors. Our results demonstrate that the direct-link (DL) interference from the ambient power source leads to a BER floor in the SA receiver, which the MA receiver can avoid by estimating the angle of arrival (AoA) of the DL. The analysis further quantifies the effect of improved angular resolution on the BER as a function of the number of receive antennas. Third, the advantages of utilizing Manchester encoding for data transmission in the context of non-coherent ambient backscatter are explored. Specifically, encoding is shown to simplify the detection procedure at the receiver, since the optimal decision rule is found to be independent of the system parameters. Through extensive numerical results, it is further shown that a backscatter system with Manchester encoding can achieve a signal-to-noise ratio (SNR) gain compared to the commonly used uncoded direct on-off keying (OOK) modulation when used in conjunction with a multi-antenna receiver employing direct-link cancellation. Fourth, the BER performance of frequency-shift ambient backscatter, which achieves self-interference mitigation by spectrally separating the reflected backscatter signal from the impinging source signal, is investigated. The performance of the system is evaluated for a non-coherent receiver under slow fading in two different network setups: 1) a single interfering link coming from the ambient transmission occurring in the shifted frequency region, and 2) a large-scale network with multiple interfering signals coming from the backscatter nodes and ambient source devices transmitting in the band of interest. Modeling the interfering devices as a two-dimensional Poisson point process (PPP), tools from stochastic geometry are utilized to evaluate the bit error rate for the large-scale network setup.
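To make the energy-averaging test statistic concrete, the Monte Carlo sketch below simulates a toy on-off ambient backscatter link: under bit 0 the receiver sees only the direct ambient signal, under bit 1 it additionally sees the weak reflected path, and detection compares the average of |y|^2 over N samples against a fixed threshold. The link gains, noise level, and midpoint threshold rule are illustrative assumptions (the exact ML threshold is what the analysis above finds intractable), and no CLT approximation is invoked.

```python
# Monte Carlo BER of an energy-averaging detector for on-off
# ambient backscatter. Bit 0: direct link only; bit 1: direct link
# plus a weak backscattered path. All link gains are illustrative.
import numpy as np

rng = np.random.default_rng(3)
trials, N = 20_000, 64          # bits simulated, samples per bit
h_dl, alpha = 1.0, 0.25         # direct-link gain, backscatter gain
noise_std = 0.5

def crandn(*shape):
    """Unit-variance circularly symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

bits = rng.integers(0, 2, trials)
s = crandn(trials, N)                       # ambient source symbols
gain = h_dl + alpha * bits[:, None]         # effective channel per bit
y = gain * s + noise_std * crandn(trials, N)

stat = np.mean(np.abs(y) ** 2, axis=1)      # energy-averaging statistic
# Midpoint threshold between the two conditional mean energies
# (a simple stand-in for the ML threshold discussed above).
e0 = h_dl**2 + noise_std**2
e1 = (h_dl + alpha) ** 2 + noise_std**2
thresh = (e0 + e1) / 2
ber = np.mean((stat > thresh) != bits.astype(bool))
print(f"BER ~ {ber:.4f} with N={N} samples/bit")
```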
- Application of Machine Learning to Multi Antenna Transmission and Machine Type Resource Allocation
  Emenonye, Don-Roberts Ugochukwu (Virginia Tech, 2020-09-11)
  Wireless communication systems are a well-researched area of electrical engineering that has continually evolved over the past decades. This constant evolution and development have led to well-formulated theoretical baselines in terms of reliability and efficiency. However, most communication baselines are derived by splitting baseband communications into a series of modular blocks, such as modulation, coding, channel estimation, and orthogonal frequency-division multiplexing. Subsequently, these blocks are independently optimized. Although this has led to a very efficient and reliable process, a theoretical verification of the optimality of this design process is not feasible due to the complexities of each individual block. In this work, we propose two modifications to these conventional wireless systems. First, with the goal of designing better space-time block codes for improved reliability, we propose to redesign the transmit and receive blocks of the physical layer. We replace a portion of the transmit chain, from modulation to antenna mapping, with a neural network. Similarly, the receiver/decoder is also replaced with a neural network. In other words, the first part of this work focuses on jointly optimizing the transmit and receive blocks to produce a set of space-time codes that are resilient to Rayleigh fading channels. We compare our results to the conventional orthogonal space-time block codes for multiple antenna configurations. The second part of this work investigates the possibility of designing a distributed multi-agent reinforcement learning-based multi-access algorithm for machine-type communication. This work recognizes that cellular networks are being proposed as a solution for the connectivity of machine-type devices (MTDs), and one of the most crucial aspects of scheduling in cellular connectivity is the random access procedure. The random access process is used by conventional cellular users to receive an allocation for uplink transmissions. This process usually requires six resource blocks. It is efficient for cellular users to perform this process because transmission of cellular data usually requires more than six resource blocks, so it is relatively efficient to perform the random access process in order to establish a connection. Moreover, as long as cellular users maintain synchronization, they do not have to undertake the random access process every time they have data to transmit; they can maintain a connection with the base station through discontinuous reception. On the other hand, the random access process is unsuitable for MTDs because MTDs usually have small-sized packets, and performing the random access process to transmit such small-sized packets is highly inefficient. Also, most MTDs are power constrained, so they turn off when they have no data to transmit. This means that they lose their connection, cannot maintain any form of discontinuous reception, and must perform the random access process each time they have data to transmit. Due to these observations, explicit scheduling is undesirable for machine-type communication. To overcome these challenges, we propose bypassing the entire scheduling process by using a grant-free resource allocation scheme in which MTDs pseudo-randomly transmit their data in random access slots. Note that this results in the possibility of a large number of collisions during the random access slots. To alleviate the resulting congestion, we exploit a heterogeneous network and investigate the optimal MTD-BS association that minimizes the long-term congestion experienced in the overall cellular network. Our results show that we can derive the optimal MTD-BS association when the number of MTDs is less than the total number of random access slots.
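The congestion created by grant-free access is easy to reproduce in simulation: when each active MTD picks one of K random access slots uniformly at random, only singleton slots succeed. The sketch below estimates the per-slot success rate against the slotted-ALOHA formula; device and slot counts are illustrative, and no MTD-BS association logic is modeled.

```python
# Toy grant-free access: each active machine-type device (MTD)
# transmits in one of K slots chosen uniformly at random; a slot
# succeeds only if exactly one device picked it. Numbers illustrative.
import numpy as np

rng = np.random.default_rng(4)
K, rounds = 50, 10_000

for n_devices in (10, 50, 200):
    success = 0
    for _ in range(rounds):
        slots = rng.integers(0, K, n_devices)
        counts = np.bincount(slots, minlength=K)
        success += np.sum(counts == 1)
    # Analytical per-slot success probability is the slotted-ALOHA
    # result (n/K) * (1 - 1/K)^(n-1).
    theory = n_devices / K * (1 - 1 / K) ** (n_devices - 1)
    print(f"n={n_devices:3d}: success/slot ~ {success / (rounds * K):.3f}, "
          f"theory {theory:.3f}")
```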
- Average Link Rate Analysis over Finite Time Horizon in a Wireless Network
  Bodepudi, Sai Nisanth (Virginia Tech, 2017-03-30)
  Instantaneous and ergodic rates are two of the most commonly used metrics to characterize the throughput of wireless networks. Roughly speaking, the former characterizes the rate achievable in a given time slot, whereas the latter is useful in characterizing the average rate achievable over a long time period. Clearly, reality often lies somewhere in between these two extremes. Consequently, in this work, we define and characterize a more realistic N-slot average rate (achievable rate averaged over N time slots). This N-slot average rate metric refines the popular notion of ergodic rate, which is defined under the assumption that a user experiences a complete ensemble of channel and interference conditions in the current session (not always realistic, especially for short-lived sessions). The proposed metric is used to study the performance of typical nodes in both ad hoc and downlink cellular networks. The ad hoc network is modeled as a Poisson bipolar network with a fixed distance between each transmitter and its intended receiver. The cellular network is also modeled as a homogeneous Poisson point process. For both these setups, we use tools from stochastic geometry to derive the distribution of the N-slot average rate in the following three cases: (i) rate across the N time slots is completely correlated, (ii) rate across the N time slots is independent and identically distributed, and (iii) rate across the N time slots is partially correlated. While reality is closest to the third case, the exact characterization of the first two extreme cases exposes certain important design insights.
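The contrast between the fully correlated and i.i.d. extremes above can be seen in a quick Monte Carlo of a Poisson bipolar network: fix the topology, then either reuse one Rayleigh fading draw for all N slots or redraw it each slot. All densities, the path-loss exponent, and the link distance below are illustrative assumptions.

```python
# Monte Carlo of the N-slot average rate at the typical receiver of
# a Poisson bipolar network, contrasting fully correlated fading
# (one draw reused for N slots) with i.i.d. fading across slots.
import numpy as np

rng = np.random.default_rng(5)
lam, R, d0, ple = 0.1, 30.0, 1.0, 4.0   # density, window radius, link dist, PLE
P, noise, N, runs = 1.0, 1e-3, 10, 4000

def interferer_distances():
    n = rng.poisson(lam * np.pi * R**2)
    return R * np.sqrt(rng.uniform(size=n))   # distances from the origin

def avg_rate(r, correlated):
    rates = []
    for _ in range(1 if correlated else N):
        h0 = rng.exponential()                # desired-link Rayleigh power
        hi = rng.exponential(size=r.size)     # interferer fading powers
        sinr = (P * h0 * d0**-ple) / (noise + np.sum(P * hi * r**-ple))
        rates.append(np.log2(1 + sinr))
    return rates[0] if correlated else np.mean(rates)

for corr in (True, False):
    vals = np.array([avg_rate(interferer_distances(), corr) for _ in range(runs)])
    print(f"{'correlated' if corr else 'i.i.d.':10s}: "
          f"mean {vals.mean():.2f} b/s/Hz, std {vals.std():.2f}")
# The i.i.d. case concentrates around the ergodic rate, while full
# correlation retains single-slot variability, as discussed above.
```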
- Cellular-Assisted Vehicular Communications: A Stochastic Geometric Approach
  Guha, Sayantan (Virginia Tech, 2016-02-04)
  A major component of future communication systems is vehicle-to-vehicle (V2V) communications, in which vehicles along roadways transfer information directly among themselves and with roadside infrastructure. Despite its numerous potential advantages, V2V communication suffers from one inherent shortcoming: the stochastic and time-varying nature of the node distributions in a vehicular ad hoc network (VANET) often leads to loss of connectivity and lower coverage. One possible way to improve this coverage is to allow the vehicular nodes to connect to the more reliable cellular network, especially in cases of loss of connectivity in the vehicular network. In this thesis, we analyze this possibility of boosting the performance of VANETs, especially their node coverage, by taking assistance from the cellular network. The spatial locations of the vehicular nodes in a VANET exhibit a unique characteristic: they always lie on roadways, which are predominantly linear but are irregularly placed on a two-dimensional plane. While there has been significant work on modeling wireless networks using random spatial models, most of it uses homogeneous planar Poisson point processes (PPPs) to maintain tractability, which is clearly not applicable to VANETs. Therefore, to accurately capture the spatial distribution of vehicles in a VANET, we model the roads using the so-called Poisson line process and then place vehicles randomly on each road according to a one-dimensional homogeneous PPP. As is usually the case, the locations of the cellular base stations are modeled by a planar two-dimensional PPP. Therefore, in this thesis, we propose a new two-tier model for cellular-assisted VANETs, where the cellular base stations form a planar PPP and the vehicular nodes form one-dimensional PPPs on roads modeled as undirected lines according to a Poisson line process. The key contribution of this thesis is the stochastic geometric analysis of a maximum-power-based cellular-assisted VANET scheme, in which a vehicle receives information from either the nearest vehicle or the nearest cellular base station, based on the received power. We characterize the network interference and obtain expressions for the coverage probability in this cellular-assisted VANET, and successfully demonstrate that this switching technique can provide a significant improvement in coverage and thus better vehicular network performance in the future. In addition, this thesis also analyzes two threshold-distance-based schemes which trade off network coverage for a reduction in additional cellular network load; notably, these schemes also outperform traditional vehicular networks that do not use any cellular assistance. Thus, this thesis mathematically validates the possibility of improving VANET performance using cellular networks.
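The PLP-PPP construction above can be sampled directly: lines hitting a disc of radius R correspond to a PPP on [0, pi) x (-R, R) in (angle, signed perpendicular distance), and vehicles form an independent 1-D PPP along each resulting chord. The densities in the sketch are illustrative.

```python
# Sample a Poisson line process (PLP) inside a disc of radius R and
# place vehicles on each line as an independent 1-D PPP, i.e. the
# PLP-PPP model for roads and vehicles. Densities are illustrative.
import numpy as np

rng = np.random.default_rng(6)
R, lam_line, lam_veh = 5.0, 0.3, 1.0   # km, lines per km, vehicles per km

# A line is (theta, rho): x*cos(theta) + y*sin(theta) = rho.
# Lines hitting the disc form a PPP on [0, pi) x (-R, R).
n_lines = rng.poisson(lam_line * np.pi * 2 * R)
theta = rng.uniform(0, np.pi, n_lines)
rho = rng.uniform(-R, R, n_lines)

vehicles = []
for th, rh in zip(theta, rho):
    half_chord = np.sqrt(R**2 - rh**2)             # segment inside the disc
    n_v = rng.poisson(lam_veh * 2 * half_chord)
    t = rng.uniform(-half_chord, half_chord, n_v)  # position along the line
    foot = rh * np.array([np.cos(th), np.sin(th)]) # foot of the perpendicular
    direction = np.array([-np.sin(th), np.cos(th)])
    vehicles.append(foot + t[:, None] * direction)
vehicles = np.vstack(vehicles) if vehicles else np.empty((0, 2))
print(f"{n_lines} roads, {len(vehicles)} vehicles in the disc")
```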
- Channel Propagation Model for Train to Vehicle Alert System at 5.9 GHz using Dedicated Short Range Communication
  Rowe, Christopher D. (Virginia Tech, 2016-10-07)
  The most common railroad accidents today involve collisions between trains and passenger vehicles at railroad grade crossings [1][2]. Due to the size and speed of a train, these collisions generally result in significant damage and serious injury. Despite recent efforts by projects such as Operation Lifesaver to install safety features at grade crossings, up to 80% of United States railroad grade crossings are classified as 'unprotected', with no lights, warnings, or crossing gates [2]. Further, from January to September 2012, nearly 10% of all reported vehicle accidents were a result of train-to-vehicle collisions. These collisions also accounted for nearly 95% of all reported fatalities from vehicular accidents [2]. To help provide a more rapidly deployable safety system, advanced dedicated short range communication (DSRC) systems are being developed. DSRC is an emerging technology that is currently being explored by the automotive safety industry for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications to provide intelligent transportation services (ITS). DSRC uses the WAVE protocols and the IEEE 1609 standards. Among the many features of DSRC systems is the ability to sense and then provide an early warning of a potential collision [6]. One potential adaptation of this technology is as a train-to-vehicle collision warning system for unprotected grade crossings. These new protocols also pose an interesting opportunity for enhancing cybersecurity, since attackers may eventually identify these types of mass disasters as targets of opportunity. To provide a thorough channel model of the proposed train-to-vehicle communication environment, both large-scale path loss and small-scale fading are analyzed to characterize the propagation environment. Measurements were collected at TTCI in Pueblo, Colorado, to record the received signal strength in a train-to-vehicle communication environment. From the received signal strength, different channel models can be developed to characterize the communication environment. Documented metrics include large-scale path loss, Rician small-scale fading, delay spread, and Doppler spread. An analysis of DSRC performance based on packet error rate is also included.
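As a generic illustration of the two pieces such a channel model combines, the sketch below generates received power from log-distance path loss with lognormal shadowing plus Rician small-scale fading; every parameter value (exponent, shadowing spread, K-factor) is an illustrative assumption, not a value fitted from the TTCI measurements.

```python
# Toy large-scale + small-scale channel model of the kind fitted in
# such measurement campaigns: log-distance path loss with lognormal
# shadowing plus Rician fading. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def rx_power_dbm(d_m, tx_dbm=20.0, pl0_db=47.9, d0=1.0, n_exp=2.2,
                 shadow_db=3.0, K_db=6.0, samples=1):
    """Received power over `samples` fading realizations at range d_m.
    pl0_db ~ free-space loss at d0 = 1 m for 5.9 GHz."""
    pl = pl0_db + 10 * n_exp * np.log10(d_m / d0)   # log-distance path loss
    shadow = shadow_db * rng.standard_normal()      # lognormal shadowing (dB)
    # Rician fading: fixed LOS phasor plus Rayleigh scatter, unit mean power.
    K = 10 ** (K_db / 10)
    los = np.sqrt(K / (K + 1))
    scat = np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal(samples)
                                         + 1j * rng.standard_normal(samples))
    fading_db = 20 * np.log10(np.abs(los + scat))
    return tx_dbm - pl - shadow + fading_db

print(np.round(rx_power_dbm(d_m=200.0, samples=5), 1), "dBm at 200 m")
```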
- Comprehensive Performance Analysis of Localizability in Heterogeneous Cellular Networks
  Bhandari, Tapan (Virginia Tech, 2017-08-03)
  The availability of location estimates of mobile devices (MDs) is vital for several important applications such as law enforcement, disaster management, battlefield operations, vehicular communication, traffic safety, emergency response, and preemption. While the global positioning system (GPS) is usually sufficient in outdoor clear-sky conditions, its functionality is limited in urban canyons and indoor locations due to the absence of a clear line-of-sight between the MD to be localized and a sufficient number of navigation satellites. In such scenarios, the ubiquitous nature of cellular networks makes them a natural choice for the localization of MDs. Traditionally, localization in cellular networks has been studied using system-level simulations with fixed base station (BS) geometries. However, with the increasing irregularity of BS locations (especially due to capacity-driven small cell deployments), the system insights obtained by considering simple BS geometries may not carry over to real-world deployments. This necessitates studying localization performance under statistical (random) spatial models, which is the main theme of this work. In this thesis, we use powerful tools from stochastic geometry and point process theory to develop a tractable analytical model for studying the localizability (the ability to get a location fix) of an MD in single-tier and heterogeneous cellular networks (HetNets). More importantly, we study how the availability of information about the locations of proximate BSs at the MD impacts localizability. To this end, we derive tractable expressions, bounds, and approximations for the localizability probability of an MD. These expressions depend on several key system parameters and can be used to infer valuable system insights. Using these expressions, we quantify the gains achieved in the localizability of an MD when information about the locations of proximate BSs is incorporated in the model. As expected, our results demonstrate that localizability improves with increasing density of BS deployments.
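A common working definition in such analyses is that an MD is localizable if at least L BSs (L = 3 for 2-D positioning) are received above a threshold. The Monte Carlo sketch below estimates that probability for PPP-deployed BSs around the typical MD; the propagation constants and threshold are illustrative, and the dissertation's expressions additionally account for interference and BS-location side information.

```python
# Monte Carlo estimate of localizability: the probability that the
# typical mobile device hears at least L = 3 PPP-deployed BSs above
# an SNR threshold. Propagation parameters are illustrative.
import numpy as np

rng = np.random.default_rng(8)
R, alpha, P, noise = 20.0, 4.0, 1.0, 1e-6    # window radius, PLE, power, noise
L, thresh_db, runs = 3, 0.0, 20_000
thresh = 10 ** (thresh_db / 10)

def localizable(lam):
    n = rng.poisson(lam * np.pi * R**2)
    r = R * np.sqrt(rng.uniform(size=n))      # BS distances from the MD
    h = rng.exponential(size=n)               # Rayleigh fading powers
    snr = P * h * r**-alpha / noise
    return np.sum(snr >= thresh) >= L

for lam in (0.01, 0.05, 0.1):
    p = np.mean([localizable(lam) for _ in range(runs)])
    print(f"BS density {lam}: localizability ~ {p:.3f}")
# Localizability increases with BS density, matching the trend
# reported above.
```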
- Context-Aware Resource Management and Performance Analysis of Millimeter Wave and Sub-6 GHz Wireless Networks
  Semiari, Omid (Virginia Tech, 2017-08-28)
  Emerging wireless networks are foreseen as an integration of heterogeneous spectrum bands, wireless access technologies, and backhaul solutions, as well as a large-scale interconnection of devices, people, and vehicles. Such heterogeneity will range from the proliferation of multi-tasking user devices with different capabilities, such as smartphones and tablets, to the deployment of multi-mode access points that can operate over heterogeneous frequency bands spanning both sub-6 GHz microwave and high-frequency millimeter wave (mmW) bands. This heterogeneous ecosystem will yield new challenges and opportunities for wireless resource management. On the one hand, resource management can exploit user- and network-specific context information, such as application type, social metrics, or operator pricing, to develop application-driven, context-aware networks. Similarly, multiple frequency bands can be leveraged to meet the stringent and heterogeneous quality-of-service (QoS) requirements of new wireless services such as video streaming and interactive gaming. On the other hand, resource management in such heterogeneous, multi-band, and large-scale wireless systems requires distributed frameworks that can effectively utilize all available resources while operating with manageable overhead. The key goal of this dissertation is therefore to develop novel, self-organizing, and low-complexity resource management protocols, using techniques from matching theory, optimization, and machine learning, to address critical resource allocation problems for emerging heterogeneous wireless systems while explicitly modeling and factoring in diverse network context information. Towards achieving this goal, this dissertation makes a number of key contributions. First, a novel context-aware scheduling framework is developed for enabling dual-mode base stations to efficiently and jointly utilize mmW and microwave frequency resources while maximizing the number of user applications whose stringent delay requirements are satisfied. The results show that the proposed approach significantly improves the QoS per application and decreases the outage probability. Second, novel solutions are proposed to address both network formation and resource allocation problems in multi-hop wireless backhaul networks that operate at mmW frequencies. The proposed framework motivates collaboration among multiple network operators through resource sharing to reduce the cost of backhauling, while jointly accounting for both wireless channel characteristics and economic factors. Third, a novel framework is proposed that exploits high-capacity mmW communications and device-level caching to minimize handover failures as well as the energy consumption caused by inter-frequency measurements, and to provide seamless mobility in dense heterogeneous mmW-microwave small cell networks (SCNs). Fourth, a new cell association algorithm is proposed, based on matching theory with minimum quota constraints, to optimize load balancing in integrated mmW-microwave networks. Fifth, a novel medium access control (MAC) protocol is proposed to dynamically manage wireless local area network (WLAN) traffic jointly over the unlicensed 60 GHz mmW and sub-6 GHz bands to maximize the saturation throughput and minimize the delay experienced by users. Finally, a novel resource management approach is proposed to optimize device-to-device (D2D) communications and improve traffic offload in heterogeneous wireless SCNs by leveraging social context information that is dynamically learned by the network. In a nutshell, by providing novel, context-aware, and self-organizing frameworks, this dissertation addresses fundamentally challenging resource management problems that stem from the large scale, stringent service requirements, and heterogeneity of next-generation wireless networks.
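For the matching-theoretic cell association mentioned above, the baseline algorithm is deferred acceptance. The sketch below runs plain user-proposing deferred acceptance (Gale-Shapley) between users and BSs with maximum quotas only; the random preference lists stand in for rate- and load-based utilities, and the minimum-quota constraints that are the technically hard part of the dissertation's algorithm are omitted.

```python
# User-proposing deferred acceptance between users and BSs with
# (maximum) quotas: a baseline for matching-based cell association.
# Random preferences stand in for rate/load-based utilities.
import numpy as np

rng = np.random.default_rng(9)
n_users, n_bs, quota = 8, 3, 3

u_pref = np.argsort(rng.random((n_users, n_bs)), axis=1)   # users' ranked BSs
bs_rank = np.argsort(rng.random((n_bs, n_users)), axis=1).argsort(axis=1)
# bs_rank[b, u] = rank of user u at BS b (lower is better)

match = {b: [] for b in range(n_bs)}
next_choice = np.zeros(n_users, dtype=int)
free = list(range(n_users))
while free:
    u = free.pop()
    b = u_pref[u, next_choice[u]]     # propose to the best untried BS
    next_choice[u] += 1
    match[b].append(u)
    if len(match[b]) > quota:         # over quota: BS rejects its worst user
        worst = max(match[b], key=lambda x: bs_rank[b, x])
        match[b].remove(worst)
        free.append(worst)

for b, users in match.items():
    print(f"BS {b}: users {sorted(users)}")
```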
- Controllable Visual Synthesis
  AlBahar, Badour A. Sh A. (Virginia Tech, 2023-06-08)
  Computer graphics has become an integral part of various industries such as entertainment (i.e., films and content creation), fashion (i.e., virtual try-on), and video games. Computer graphics has evolved tremendously over the past years, showing remarkable improvement in image generation: from low-quality, pixelated images with limited details to highly realistic images with fine details that can often be mistaken for real images. However, the traditional pipeline of rendering an image in computer graphics is complex and time-consuming. The whole process of creating the geometry, material, and textures requires not only time but also significant expertise. In this work, we aim to replace this complex traditional computer graphics pipeline with a simple machine learning model. This machine learning model can synthesize realistic images without requiring expertise or significant time and effort. Specifically, we address the problem of controllable image synthesis. We propose several approaches that allow the user to synthesize realistic content and manipulate images to achieve their desired goals with ease and flexibility.
- Coordinated Beamforming for Millimeter-wave Terrestrial Peer-to-Peer Communication Networks
  Marinkovich, Aaron James Angelo (Virginia Tech, 2020-10-14)
  Terrestrial mobile peer-to-peer millimeter wave networks will likely use beamforming arrays with narrow beams. Aligning narrow beams is difficult. One consideration for aligning narrow beams is co-channel interference. Beams can be aligned either on a per-link basis where co-channel interference is ignored, or on a global basis where co-channel interference is considered. One way to align beams on a global basis is coordinated beamforming. Coordinated beamforming can be defined as alignment of beams on a global basis, so as to jointly optimize the signal-to-interference-plus-noise ratio (SINR) of all links operating in a network. In this work, we explore coordinated beamforming in peer-to-peer networks and demonstrate its efficacy. Networks with varying numbers of links are simulated in scenarios with and without obstructions. The coordinated beamforming schemes presented in this work significantly improve link SINR statistics in these scenarios. Greater improvement was found in networks with higher numbers of links and in networks in terrain with obstructions.
- Coping Uncertainty in Wireless Network Optimization
  Li, Shaoran (Virginia Tech, 2022-10-24)
  Network optimization plays an important role in 5G/next-G networks, and it requires knowledge of network parameters (e.g., channel state information). The majority of existing works assume that all network parameters are either given a priori or can be accurately estimated. However, in many practical scenarios, some parameters are uncertain at the time of allocating resources and can only be modeled by random variables. Further, we only have limited knowledge of those uncertain parameters. For instance, channel gains are not exactly known due to channel estimation errors, network delay, limited feedback, and a lack of cooperation (between networks). Therefore, a practical solution to network optimization must address such uncertainty inside wireless networks. There are three approaches to addressing such network uncertainty: stochastic programming, worst-case optimization, and chance-constrained programming (CCP). Among the three, CCP has some unique benefits compared to the other two approaches. Stochastic programming explicitly requires full distribution knowledge, which is usually unavailable in practice. In comparison, CCP can work with various settings of available knowledge, such as first- and second-order statistics, symmetry properties, or limited data samples. Therefore, CCP is more flexible in handling different network settings, which is important for addressing problems in 5G/next-G networks. Further, worst-case optimization assumes upper or lower bounds (i.e., worst cases) for the uncertain parameters and is known to be conservative due to its focus on extreme cases. In contrast, CCP allows occasional and controllable violations of some constraints and thus offers much better resource utilization compared to worst-case optimization. The only drawback of CCP is that it may lead to intractability due to its probabilistic formulation and the limited knowledge of the underlying random variables. To date, CCP has not been well utilized in the wireless communication and networking community. The goal of this dissertation is to extend the state of the art of CCP techniques and address a number of challenging network optimization problems. This dissertation is correspondingly organized into two parts. In the first part, we assume the uncertain parameters are known only by their mean and covariance (without distribution knowledge). We assume these statistics are rather stationary (i.e., time-invariant for a sufficiently long time) and thus can be accurately estimated. In this setting, we introduce a novel reformulation technique based on the mean and covariance to derive a solution. In the second part, we assume these statistics are time-varying and thus cannot be accurately estimated. In this setting, we employ limited data samples that are collected in a small time window and use them to derive a solution. For the first part, we investigate four research problems based on the mean and covariance of the uncertain parameters:
  - In the first problem, we study how to maximize spectrum efficiency in underlay coexistence. The interference from all secondary users to each primary user must be kept below a given threshold. However, there is much uncertainty about the channel gains between the primary users and the secondary users due to a lack of cooperation between them. We formulate probabilistic interference constraints using CCP for the primary users. For tractability, we introduce a novel and powerful reformulation technique called Exact Conic Reformulation (ECR). With limited knowledge of mean and covariance, ECR equivalently reformulates the intractable chance constraints into tractable deterministic constraints without relaxation errors. After reformulation, we employ linearization techniques on the mixed-integer non-linear problem to reduce the computational complexity. We show that our proposed approach can achieve near-optimal performance and stands as a performance benchmark for the underlay coexistence problem.
  - To find a solution for the same underlay coexistence problem that can be used in the real world, we need to find a solution in "real time". The real-time requirement here refers to finding a solution in 125 µs (the minimum time slot for small cells in 5G). Our proposed solution has three steps. First, it employs ECR to reformulate the original CCP into a deterministic optimization problem. Then it decomposes the problem and narrows down the search space into a smaller but promising one. By random sampling inside the promising search space and through local search, our proposed solution can meet the 125 µs requirement in 5G while achieving 90% optimality on average.
  - We further apply CCP, predicated on the reformulation technique ECR, to two other problems.
    * We study the problem of power control in concurrent transmissions. Our objective is to maximize energy efficiency for all transmitter-receiver pairs with capacity requirements. This problem is challenging due to mutual interference among different transmitter-receiver pairs and the uncertain channel gain between any transmitter and receiver. We formulate a CCP and reformulate it into a deterministic problem using ECR. Then we employ Geometric Programming (GP) with a tight approximation to derive a near-optimal solution.
    * We study task offloading in Mobile Edge Computing (MEC), where the number of processing cycles of a task is unknown until completion. The goal is to minimize the energy consumption of the users while meeting probabilistic deadlines for the tasks. We formulate the probabilistic deadlines as chance constraints and then use ECR to reformulate them into deterministic constraints. We propose a solution that consists of periodic scheduling and schedule updates to choose the offloaded tasks and task-to-processor assignments at the base station.
  In the second part, we investigate two research problems based on limited data samples of the uncertain parameters:
  - We study MU-MIMO beamforming based on Channel State Information (CSI). The goal is to derive a beamforming solution (minimizing power consumption at the BS while meeting the probabilistic data rate requirements of the users) using very limited CSI data samples. For our CCP formulation, we explore the idea of the Wasserstein ambiguity set to quantify the distance between the true (but unknown) distribution and the empirical distribution based on the limited data samples. Our proposed solution, Data-Driven Beamforming (D^2BF), reformulates the CCP into a non-convex deterministic optimization problem based on the properties of the Wasserstein ambiguity set. Then D^2BF employs a novel convex approximation to the non-convex deterministic problem, which can be directly solved by commercial solvers.
  - For a solution to the MU-MIMO beamforming problem to be useful in the real world, it must meet the "real-time" requirement. Here, the real-time requirement refers to 1 ms, which is one transmission time interval (TTI) under 5G numerology 0. We present ReDBeam, a Real-time Data-driven Beamforming solution for the MU-MIMO beamforming problem (minimizing power consumption while offering probabilistic data rate guarantees to the users) with limited CSI data samples. ReDBeam is a parallel algorithm and is purposefully designed to take advantage of the vast parallel processing capability offered by GPUs. ReDBeam generates a large number of initial solutions from a promising search space and then refines each solution by a local search. We show that ReDBeam meets the 1 ms real-time requirement on a commercial GPU and is orders of magnitude faster than other state-of-the-art algorithms for the same problem.
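To illustrate the general shape of such reformulations (not ECR itself, whose exactness is the dissertation's contribution), consider a linear chance constraint Pr(g^T p <= tau) >= 1 - eps when only the mean mu and covariance Sigma of g are known. The classical distributionally robust (one-sided Chebyshev) bound turns it into the deterministic constraint mu^T p + sqrt((1 - eps)/eps) * sqrt(p^T Sigma p) <= tau, which the snippet below checks numerically; all numbers are illustrative.

```python
# Check a mean-covariance chance constraint Pr(g.T @ p <= tau) >= 1-eps
# via the classical distributionally robust (one-sided Chebyshev)
# reformulation: mu.T @ p + k * sqrt(p.T @ Sigma @ p) <= tau with
# k = sqrt((1 - eps) / eps). A generic stand-in for ECR, not the
# exact reformulation developed in the dissertation.
import numpy as np

rng = np.random.default_rng(10)
eps, tau = 0.05, 1.0
mu = np.array([0.2, 0.3, 0.1])          # mean channel gains (illustrative)
A = rng.random((3, 3)) * 0.1
Sigma = A @ A.T + 0.01 * np.eye(3)      # a valid covariance matrix

def robustly_feasible(p):
    k = np.sqrt((1 - eps) / eps)
    return mu @ p + k * np.sqrt(p @ Sigma @ p) <= tau

p = np.array([0.5, 0.4, 0.6])           # candidate power allocation
print("robustly feasible:", robustly_feasible(p))

# Empirical check under one distribution compatible with (mu, Sigma):
g = rng.multivariate_normal(mu, Sigma, size=100_000)
print("empirical Pr(g.p <= tau):", np.mean(g @ p <= tau))
```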
- Coverage, Secrecy, and Stability Analysis of Energy Harvesting Wireless Networks
  Kishk, Mustafa (Virginia Tech, 2018-08-03)
  Including energy harvesting capability in a wireless network is attractive for multiple reasons. First and foremost, powering base stations with renewable resources could significantly reduce their reliance on traditional energy sources, thus helping curtail the carbon footprint. Second, including this capability in wireless devices may help increase their lifetime, which is especially critical for devices for which it may not be easy to charge or replace batteries. This will often be the case for a large fraction of the sensors that will form the digital skin of an Internet of Things (IoT) ecosystem. Motivated by these factors, this work studies the fundamental performance limitations that appear due to the inherent unreliability of energy harvesting when it is used as a primary or secondary source of energy by different elements of the wireless network, such as mobile users, IoT sensors, and/or base stations. The first step taken towards this objective is studying the joint uplink and downlink coverage of radio-frequency (RF) powered cellular-based IoT. Modeling the locations of the IoT devices and the base stations (BSs) using two independent Poisson point processes (PPPs), the joint uplink/downlink coverage probability is derived. The resulting expressions characterize how different system parameters impact coverage performance. Both the mathematical expressions and simulation results show how these system parameters should be tuned in order to achieve the performance of regularly powered IoT (IoT devices powered by regular batteries). The placement of RF-powered devices close to the RF sources, to harvest more energy, raises some concerns about the security of the signals transmitted by these RF sources to their intended receivers. Studying this problem is the second step taken in this dissertation towards a better understanding of energy harvesting wireless networks. While these secrecy concerns have recently been addressed for the point-to-point link, they have received less attention for the more general networks with randomly located transmitters (RF sources) and RF-powered devices, which is the main contribution in the second part of this dissertation. In the last part of this dissertation, we study the stability of solar-powered cellular networks. We use tools from percolation theory to study the percolation probability of energy-drained BSs. We study the effect of two system parameters on that metric, namely the energy arrival rate and the user density. Our results show the existence of a critical value for the ratio of the energy arrival rate to the user density, above which the percolation probability is zero. The next step to further improve the accuracy of the stability analysis is to study the effect of correlation between the battery levels at neighboring BSs. We provide an initial study that captures this correlation. The main insight drawn from our analysis is the existence of an optimal overlapping coverage area for neighboring BSs to serve each other's users when they are energy-drained.
- Deep Recurrent Q Networks for Dynamic Spectrum Access in Dynamic Heterogeneous Environments with Partial Observations
  Xu, Yue (Virginia Tech, 2022-09-23)
  Dynamic Spectrum Access (DSA) has strong potential to address the need for improved spectrum efficiency. Unfortunately, traditional DSA approaches such as simple "sense-and-avoid" fail to provide sufficient performance in many scenarios. Thus, the combination of sensing with deep reinforcement learning (DRL) has been shown to be a promising alternative to previously proposed simplistic approaches. DRL does not require the explicit estimation of transition probability matrices or the prohibitively large matrix computations of traditional reinforcement learning methods. Further, since many learning approaches cannot solve the resulting online Partially Observable Markov Decision Process (POMDP), Deep Recurrent Q-Networks (DRQNs) have been proposed to determine the optimal channel access policy via online learning. The fundamental goal of this dissertation is to develop DRL-based solutions to this POMDP-DSA problem. We mainly consider three aspects in this work: (1) optimal transmission strategies, (2) combined intelligent sensing and transmission strategies, and (3) learning efficiency, or online convergence speed. Four key challenges in this problem are (1) the proposed DRQN-based node does not know the other nodes' behavior patterns a priori and must predict the future channel state based on previous observations; (2) the impact on primary user throughput during and even after learning must be limited; (3) resources can be wasted on sensing/observation; and (4) convergence speed must be improved without impacting performance. We demonstrate in this dissertation that the proposed DRQN can learn: (1) the optimal transmission strategy in a variety of environments under partial observations; (2) a sensing strategy that provides near-optimal throughput in different environments while dramatically reducing the needed sensing resources; (3) robustness to imperfect observations; (4) a sufficiently flexible approach that can accommodate dynamic environments, multi-channel transmission, and the presence of multiple agents; and (5) all of the above in an accelerated fashion utilizing one of three different approaches.
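Architecturally, a DRQN is a Q-network with a recurrent core, so the action-value estimate can condense a history of partial observations. The PyTorch sketch below shows the forward pass and an epsilon-greedy channel choice; the layer sizes, observation encoding, and the omitted replay/training loop are illustrative assumptions.

```python
# Minimal DRQN skeleton for channel selection under partial
# observability: an LSTM condenses the observation history and a
# linear head outputs per-channel Q-values. Sizes are illustrative,
# and the replay/training loop is omitted.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim=4, hidden=64, n_channels=8):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_channels)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) partial observations
        out, state = self.lstm(obs_seq, state)
        return self.q_head(out), state       # Q-values per step, LSTM state

def act(net, obs, state, epsilon=0.1):
    """One epsilon-greedy step, carrying the recurrent state forward."""
    with torch.no_grad():
        q, state = net(obs.view(1, 1, -1), state)
    if torch.rand(()) < epsilon:
        action = int(torch.randint(0, q.shape[-1], ()))
    else:
        action = int(q[0, -1].argmax())
    return action, state

net = DRQN()
state, obs = None, torch.zeros(4)            # e.g. last sensing result
for t in range(5):
    a, state = act(net, obs, state)
    print(f"t={t}: transmit on channel {a}")
```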
- Demand-Side Energy Management in the Smart Grid: Games and Prospects
  El Rahi, Georges (Virginia Tech, 2017-06-26)
  To mitigate the technical challenges faced by the next-generation smart power grid, in this thesis, novel frameworks are developed for optimizing energy management and trading between power companies and grid consumers, who own renewable energy generators and storage units. The proposed frameworks explicitly account for the effect on demand-side energy management of various consumer-centric grid factors such as the stochastic renewable energy forecast, as well as the varying future valuation of stored energy. In addition, a novel approach is proposed to enhance the resilience of consumer-centric energy trading scenarios by analyzing how a power company can encourage its consumers to store energy, in order to supply the grid’s critical loads, in case of an emergency. The developed energy management mechanisms advance novel analytical tools from game theory, to capture the coupled actions and objectives of the grid actors, and from the framework of prospect theory (PT), to capture the irrational behavior of consumers when faced with decision uncertainties. The studied PT and game-based solutions, obtained through analytical and algorithmic characterization, provide grid designers with key insights on the main drivers of each actor’s energy management decision. The ensuing results primarily characterize the difference in trading decisions between rational and irrational consumers, and its impact on energy management. The outcomes of this thesis will therefore allow power companies to design consumer-centric energy management programs that support the sustainable and resilient development of the smart grid by continuously matching supply and demand, and providing emergency energy reserves for critical infrastructure.
- Design of a High Temperature GaN-Based Variable Gain Amplifier for Downhole Communications
  Ehteshamuddin, Mohammed (Virginia Tech, 2017-02-07)
  The decline of easily accessible reserves pushes the oil and gas industry to explore deeper wells, where the ambient temperature often exceeds 210 °C. The need for high temperature operation, combined with the need for real-time data logging, has created a growing demand for robust, high temperature RF electronics. This thesis presents the design of an intermediate frequency (IF) variable gain amplifier (VGA) for downhole communications, which can operate at ambient temperatures up to 230 °C. The proposed VGA is designed in a 0.25 μm GaN-on-SiC high electron mobility transistor (HEMT) technology. Measured results at 230 °C show that the VGA has a peak gain of 27 dB at a center frequency of 97.5 MHz, and a gain control range of 29.4 dB. At maximum gain, the input P1dB is -11.57 dBm at 230 °C (-3.63 dBm at 25 °C). Input return loss is below 19 dB, and output return loss is below 12 dB across the entire gain control range from 25 °C to 230 °C. The variation with temperature (25 °C to 230 °C) is 1 dB for maximum gain and 4.7 dB for the gain control range. The total power dissipation is 176 mW at maximum gain at 230 °C.
- Design, Deployment and Performance of an Open Source Spectrum Access System
  Kikamaze, Shem (Virginia Tech, 2018-11-01)
  Spectrum sharing is possible, but lacks R&D support for practical solutions that satisfy both the incumbent and secondary or opportunistic users. The author found a lack of an openly available framework supporting experimental research on the performance of a Spectrum Access System (SAS) and proposes to build an open-source Software Defined Radio (SDR) based framework. This framework will test different dynamic spectrum scenarios in a wireless testbed. This thesis presents our Spectrum Access System prototype, discusses the design choices and trade-offs, and provides a proof-of-concept implementation. We show that the Internet-accessible CORNET testbed provides an ideal platform for developing and testing the SAS functionality and its building blocks, and we offer the hardware and software as a community resource for research and education. This design provides the necessary interfaces for researchers to develop and test their SAS-related modules, waveforms, and scenarios.
- Distributed Online Learning in Cognitive Radar Networks
  Howard, William Waddell (Virginia Tech, 2023-12-21)
  Cognitive radar networks (CRNs) were first proposed in 2006 by Simon Haykin, shortly after the introduction of cognitive radar. In order for CRNs to benefit from many of the optimization techniques developed for cognitive radar, they must have some method of coordination and control. Both centralized and distributed architectures have been proposed, and both have drawbacks. This work addresses gaps in the literature by providing the first consideration of the problems that appear when typical cognitive radar tools are extended into networks. It first examines the online learning techniques available to distributed CRNs, enabling optimal resource allocation without requiring a dedicated communication resource. While this problem has been addressed for single-node cognitive radar, we provide the first consideration of mutual interference in such networks. We go on to propose the first hybrid cognitive radar network structure, which takes advantage of central feedback while maintaining the benefits of distributed networks. We then investigate a novel problem of timely updating in CRNs, addressing questions of target update frequency and node updating methods, and draw from the Age of Information literature to propose Bellman-optimal solutions. Finally, we introduce the notion of mode control and develop a way to select between active and passive target observation.
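The online-learning ingredient above is typically a bandit algorithm: a node treats candidate bands (or modes) as arms and balances exploration against exploitation from its own reward feedback alone. A minimal single-node UCB1 sketch follows; the Bernoulli rewards stand in for SINR or tracking-quality feedback, and no multi-node interference is modeled.

```python
# Single-node UCB1 over candidate bands: a minimal example of the
# online-learning machinery used for distributed resource allocation.
# Bernoulli rewards stand in for SINR/tracking-quality feedback.
import numpy as np

rng = np.random.default_rng(11)
true_quality = np.array([0.3, 0.5, 0.8, 0.4])   # unknown per-band quality
n_arms, horizon = len(true_quality), 5000

counts = np.zeros(n_arms)
means = np.zeros(n_arms)
for t in range(1, horizon + 1):
    if t <= n_arms:                       # play each arm once to initialize
        arm = t - 1
    else:                                 # UCB1 index: mean + exploration bonus
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_quality[arm])
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]   # running average

print("pulls per band:", counts.astype(int))   # concentrates on band 2
```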