Browsing by Author "Buehrer, R. Michael"
Now showing 1 - 20 of 160
- Absolute Flux Density Measurement and Associated Instrumentation for Radio Astronomy below 100 MHz. Tillman, Richard Henry (Virginia Tech, 2016-08-23). This dissertation reports new measurements of the absolute flux densities of the brightest astrophysical sources visible from the northern hemisphere, with accuracy on the order of 10% between 30 and 78 MHz. These measurements provide additional confidence in the existing understanding of the flux density spectra of these sources in this frequency range. This dissertation also reports new measurements of the antenna temperature due to the diffuse Galactic background between 30 and 78 MHz, addressing a paucity of existing measurements in this band. These measurements are especially relevant given contemporary interest in radio astronomy and 21 cm cosmology in this frequency range. A new active antenna system and measurement technique were developed to facilitate these measurements. The antennas are simple, thin dipoles, allowing for accurate characterization. Amplification is preceded by notch filters to mitigate interference-induced non-linearity. Previous efforts have used well-matched antennas; in contrast, the narrowband antennas and notch filters on this front end create a large, frequency-varying impedance mismatch that must be accounted for, and we demonstrate how this can be done. We present a novel in situ technique that uses the antenna temperature measurements to improve the calibration of the antennas and internal noise sources.
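Accounting for an impedance mismatch of the kind described above comes down to a frequency-dependent mismatch-efficiency factor. A minimal sketch, under assumed impedance values and a 50-ohm receiver (none of these numbers are from the dissertation):

```python
import numpy as np

def mismatch_efficiency(z_ant, z_rx=50.0):
    """Fraction of available power delivered across an impedance mismatch.

    Hypothetical helper: z_ant is the complex antenna impedance and z_rx
    the receiver input impedance, both in ohms (power-wave definition).
    """
    gamma = (z_ant - np.conj(z_rx)) / (z_ant + z_rx)
    return 1.0 - np.abs(gamma) ** 2

# A narrowband dipole far from resonance is strongly reactive and badly
# mismatched, so most of the available power is reflected.
print(mismatch_efficiency(20 - 300j))
print(mismatch_efficiency(50 + 0j))   # matched case
```

In a calibration pipeline this factor would be evaluated at each frequency bin and divided out of the measured power.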
- Acoustic source localization in 3D complex urban environments. Choi, Bumsuk (Virginia Tech, 2012-04-30). The detection and localization of important acoustic events in a complex urban environment, such as gunfire and explosions, is critical to providing effective surveillance of military and civilian areas and installations. In a complex environment, obstacles such as terrain or buildings introduce multipath propagation, reflections, and diffractions that make source localization challenging. This dissertation focuses on the problem of source localization in three-dimensional (3D) realistic urban environments. Two different localization techniques are developed to solve this problem: a) beamforming using a few microphone phased arrays in conjunction with a high-fidelity model, and b) fingerprinting using many dispersed microphones in conjunction with a low-fidelity model of the environment. To arrive at an effective source localization technique using microphone phased arrays, several candidate beamformers are investigated using 2D and corresponding 3D numerical models. Among them, the most promising beamformers are chosen for further investigation using 3D large models. For realistic validation, the localization error of the beamformers is analyzed for different levels of uncorrelated noise in the environment. Multiple-array processing is also considered to improve the overall localization performance. The sensitivity of the beamformers to uncertainties that cannot be easily accounted for (e.g., a temperature gradient or an unmodeled object) is then investigated. It is observed that evaluation in 3D models is critical to correctly assess the potential of the localization technique. The enhanced minimum variance distortionless response (EMVDR) is identified as the only beamformer that has the super-directivity property (i.e., accurate localization capability) while remaining robust to uncorrelated noise in the environment.
It is also demonstrated that the detrimental effect of uncertainties in the modeling of the environment can be alleviated by incoherent multiple arrays. For an efficient source localization technique using microphones dispersed in the environment, acoustic fingerprinting in conjunction with a diffusion-based energy model is developed as an alternative to the beamforming technique. This approach is much simpler, requiring only individual microphones rather than arrays. Moreover, it does not require accurate modeling of the acoustic environment. The approach is validated using the 3D large models. The relationship between the localization accuracy and the number of dispersed microphones is investigated, and the effect of the accuracy of the model is also addressed. The results show a progressive improvement in source localization capability as the number of microphones increases. Moreover, it is shown that the fingerprints do not need to be very accurate for successful localization if enough microphones are dispersed in the environment.
- Adaptation For Multi-Antenna Systems. Phelps, Christopher Ian (Virginia Tech, 2009-08-03). Previous attempts to adapt MIMO systems in the presence of varying channel conditions typically focus on characterizing the performance of a limited, predefined set of joint MoDem/CoDec and MIMO configurations over a representative set of channel realizations. Other work has attempted to adapt only the MIMO scheme to varying channel conditions without considering the modulation format or the channel code used. Finally, attempts to configure the system through direct BER calculation based on channel conditions have also been proposed. These methods suffer from dependence on a limited set of simulated curves that may not cover all channel conditions a real system might see, from not configuring all parameters jointly, or from implicitly requiring channel state information to be fed back to the transmitter. None of these previous attempts handles both the case where CSIT is available and the case where it is not while jointly configuring the MoDem, CoDec and multi-antenna scheme. This work consists of two parts, focusing on energy efficiency in the presence of unoccupied frequency bands and on spectrally efficient operation under static frequency assignment. Utilizing the minimum Euclidean distances of MoDem constellations and the minimum free Hamming distances of channel codes, we develop distance metrics to describe the MIMO schemes which are considered. A minimum required distance is then determined as a function of the desired BER and constellation. Based on this unified set of distance metrics, adaptive algorithms can evaluate the total distance of a signaling scheme, including MoDem, CoDec and MIMO scheme, and then calculate a decision metric based on the total distance and the distance required to meet the desired BER.
The proposed system, which aims to maximize energy efficiency, is able to choose, based on spatial correlation, available channels, CSIT availability, and power amplifier configuration, the appropriate multi-antenna configuration, MoDem and CoDec to meet a fixed throughput requirement while maximizing the energy efficiency or robustness of the link. The proposed work assumes that the open channels of a network can be accessed through individually tunable RF chains of the multi-antenna system. This assumption permits the use of a multi-antenna, multi-channel scheme which sacrifices spatial diversity for frequency diversity. In addition to traditional, single-channel transmit diversity schemes, the adaptive system is also able to choose this novel multi-channel configuration when it is more energy efficient. When focusing on the maximization of spectral efficiency, a more conventional, single-channel model is assumed. In addition to the distance metrics for single-channel diversity schemes, distance metrics are then developed for spatial multiplexing schemes which take into account the interaction of spatial correlation, the number of antennas and the rate of the channel code. The adaptive system uses the total distance of the joint configuration of MoDem, CoDec and MIMO scheme to calculate a decision metric which indicates whether the configuration will meet the desired BER. From a list of joint configurations which will meet the desired BER, the adaptive system then chooses the one which maximizes the spectral efficiency.
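The minimum-Euclidean-distance metric that anchors the adaptation logic above is easy to illustrate. A small sketch for unit-energy M-PSK constellations (the normalization is an assumption for the example; nothing here is taken from the dissertation):

```python
import numpy as np
from itertools import combinations

def psk(M):
    """Unit-energy M-PSK constellation on the complex unit circle."""
    return np.exp(2j * np.pi * np.arange(M) / M)

def min_distance(points):
    """Smallest pairwise Euclidean distance between distinct points."""
    return min(abs(a - b) for a, b in combinations(points, 2))

# For M-PSK the closed form is d_min = 2*sin(pi/M); the distance shrinks
# as M grows, which is why denser constellations need more link margin.
for M in (2, 4, 8, 16):
    print(M, min_distance(psk(M)))
```

An adaptation routine would compare such per-scheme distances against the minimum required distance for the target BER.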
- Advances in Iterative Probabilistic Processing for Communication Receivers. Jakubisin, Daniel Joseph (Virginia Tech, 2016-06-27). As wireless communication systems continue to push the limits of energy and spectral efficiency, increased demands are placed on the capabilities of the receiver. At the same time, the computational resources available for processing received signals will continue to grow. This opens the door for iterative algorithms to play an increasing role in the next generation of communication receivers. In the context of receivers, the goal of iterative probabilistic processing is to approximate maximum a posteriori (MAP) symbol-by-symbol detection of the information bits and estimation of the unknown channel or signal parameters. The sum-product algorithm is capable of efficiently approximating the marginal posterior probabilities desired for MAP detection and provides a unifying framework for the development of iterative receiver algorithms. However, in some applications the sum-product algorithm is computationally infeasible. Specifically, this is the case when both continuous and discrete parameters are present within the model. Also, the complexity of the sum-product algorithm is exponential in the number of variables connected to a particular factor node and can be prohibitive in multi-user and multi-antenna applications. In this dissertation we identify three key problems which can benefit from iterative probabilistic processing, but for which the sum-product algorithm is too complex. They are (1) joint synchronization and detection in multipath channels with emphasis on frame timing, (2) detection in co-channel interference and non-Gaussian noise, and (3) joint channel estimation and multi-signal detection. This dissertation presents the advances we have made in iterative probabilistic processing in order to tackle these problems.
The motivation behind the work is to (a) compromise as little as possible on the performance that is achieved while limiting the computational complexity and (b) maintain good theoretical justification for the algorithms that are developed.
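On a tree-structured factor graph, the sum-product marginals described above are exact. A toy two-variable sketch (factor values are arbitrary, purely for illustration) comparing a sum-product message pass with brute-force marginalization:

```python
import numpy as np

# Toy chain factor graph over two binary variables:
#   p(x1, x2) proportional to f1(x1) * f12(x1, x2) * f2(x2)
f1 = np.array([0.6, 0.4])
f2 = np.array([0.3, 0.7])
f12 = np.array([[0.9, 0.1],
                [0.2, 0.8]])

# Sum-product: message into x1 from the x2 side, then multiply the local factor.
msg_to_x1 = f12 @ f2            # sum over x2 of f12(x1, x2) * f2(x2)
marg_x1 = f1 * msg_to_x1
marg_x1 /= marg_x1.sum()

# Brute-force marginalization of the full joint agrees on this tree.
joint = f1[:, None] * f12 * f2[None, :]
brute = joint.sum(axis=1)
brute /= brute.sum()
print(marg_x1, brute)
```

The exponential blow-up the abstract mentions appears when a single factor touches many variables: the message computation then sums over every joint configuration of the neighbors.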
- Advances in Stochastic Geometry for Cellular Networks. Saha, Chiranjib (Virginia Tech, 2020-08-24). The mathematical modeling and performance analysis of cellular networks have seen a major paradigm shift with the application of stochastic geometry. The main purpose of stochastic geometry is to endow probability distributions on the locations of the base stations (BSs) and users in a network, which, in turn, provides an analytical handle on the performance evaluation of cellular networks. To preserve the tractability of analysis, the common practice is to assume complete spatial randomness of the network topology. In other words, the locations of users and BSs are modeled as independent homogeneous Poisson point processes (PPPs). Despite their usefulness, PPP-based network models fail to capture any spatial coupling between the users and BSs, which is dominant in a multi-tier cellular network (also known as a heterogeneous cellular network (HetNet)) consisting of macro and small cells. For instance, the users tend to form hotspots or clusters at certain locations, and the small cell BSs (SBSs) are deployed at higher densities at these hotspot locations in order to cater to the high data demand. Such user-centric deployments naturally couple the locations of the users and SBSs. On the other hand, these spatial couplings are at the heart of the spatial models used in industry for system-level simulations and standardization purposes. This dissertation proposes fundamentally new spatial models based on stochastic geometry which closely emulate these spatial couplings and are conducive to a more realistic and fine-tuned performance analysis, optimization, and design of cellular networks. First, this dissertation proposes a new class of spatial models for HetNets where the locations of the BSs and users are assumed to be distributed as a Poisson cluster process (PCP).
From the modeling perspective, the proposed models can capture different spatial couplings in a network topology, such as user hotspots and the user-SBS coupling occurring due to the user-centric deployment of the SBSs. The PCP-based model is a generalization of the state-of-the-art PPP-based HetNet model, because it reduces to the PPP-based model once all spatial couplings in the network are ignored. From the stochastic geometry perspective, we have made contributions in deriving the fundamental distribution properties of the PCP, such as the distance distributions and sum-product functionals, which are instrumental for the performance characterization of HetNets in terms of coverage and rate. The focus on more refined spatial models for small cells and users brings us to the second direction of the dissertation, which is the modeling and analysis of HetNets with millimeter wave (mm-wave) integrated access and backhaul (IAB), an emerging design concept of the fifth generation (5G) cellular networks. While the concept of network densification with small cells emerged in the fourth generation (4G) era, small cells can be realistically deployed with IAB, since it solves the problem of high-capacity wired backhaul of SBSs by replacing the last-mile fibers with mm-wave links. We have proposed new stochastic geometry-based models for the performance analysis of IAB-enabled HetNets. Our analysis reveals some interesting system-design insights: (1) IAB HetNets can support a maximum number of users beyond which the data rate drops below the rate of a single-tier macro-only network, and (2) there exists a saturation point of SBS density beyond which no rate gain is observed from adding more SBSs. The third and final direction of this dissertation is the combination of machine learning and stochastic geometry to construct a new class of data-driven network models which can be used in the performance optimization and design of a network.
As a concrete example, we investigate the classical problem of wireless link scheduling where the objective is to choose an optimal subset of simultaneously active transmitters (Tx-s) from a ground set of Tx-s which will maximize the network-wide sum-rate. Since the optimization problem is NP-hard, we replace the computationally expensive heuristic by inferring the point patterns of the active Tx-s in the optimal subset after training a determinantal point process (DPP). Our investigations demonstrate that the DPP is able to learn the spatial interactions of the Tx-s in the optimal subset and gives a reasonably accurate estimate of the optimal subset for any new ground set of Tx-s.
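The cluster processes underlying the entry above are simple to simulate. A minimal sketch of a Thomas-type Poisson cluster process (all intensities, cluster sizes, and window dimensions are illustrative assumptions; edge effects are ignored):

```python
import numpy as np

rng = np.random.default_rng(1)

def thomas_cluster_process(lam_parent, mean_offspring, sigma, width, rng):
    """Thomas cluster process on a square window (edge effects ignored).

    Parents form a homogeneous PPP of intensity lam_parent; each parent
    gets a Poisson(mean_offspring) number of offspring scattered with an
    isotropic Gaussian of scale sigma around it.
    """
    n_parents = rng.poisson(lam_parent * width * width)
    parents = rng.uniform(0, width, size=(n_parents, 2))
    points = []
    for p in parents:
        n_off = rng.poisson(mean_offspring)
        points.append(p + sigma * rng.standard_normal((n_off, 2)))
    return parents, np.vstack(points) if points else np.empty((0, 2))

# Parents could play the role of hotspot centers, offspring the role of users.
parents, users = thomas_cluster_process(2.0, 5.0, 0.05, 3.0, rng)
```

Setting sigma large (or attaching exactly one offspring per parent) washes out the clustering, which mirrors the abstract's point that the PCP model degenerates to a PPP once couplings are ignored.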
- Adversarial RFML: Evading Deep Learning Enabled Signal Classification. Flowers, Bryse Austin (Virginia Tech, 2019-07-24). Deep learning has become a ubiquitous part of research in all fields, including wireless communications. Researchers have shown the ability to leverage deep neural networks (DNNs) that operate on raw in-phase and quadrature samples, termed Radio Frequency Machine Learning (RFML), to synthesize new waveforms, control radio resources, and detect and classify signals. While there are numerous advantages to RFML, this thesis answers the question "is it secure?" DNNs have been shown, in other applications such as Computer Vision (CV), to be vulnerable to what are known as adversarial evasion attacks, which consist of corrupting an underlying example with a small, intelligently crafted perturbation that causes a DNN to misclassify the example. This thesis develops the first threat model that encompasses the unique adversarial goals and capabilities that are present in RFML. Attacks that occur with direct digital access to the RFML classifier are differentiated from physical attacks that must propagate over-the-air (OTA) and are thus subject to impairments due to the wireless channel or inaccuracies in the signal detection stage. This thesis first finds that RFML systems are vulnerable to current adversarial evasion attacks using the well-known Fast Gradient Sign Method originally developed for CV applications. However, these current adversarial evasion attacks do not account for the underlying communications, and therefore the adversarial advantage is limited because the signal quickly becomes unintelligible. In order to envision new threats, this thesis goes on to develop a new adversarial evasion attack that takes into account the underlying communications and wireless channel models in order to create adversarial evasion attacks with more intelligible underlying communications that generalize to OTA attacks.
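The one-step Fast Gradient Sign Method named above is compact enough to sketch. This toy uses a made-up logistic "classifier" over a four-dimensional feature vector standing in for raw samples; the weights, data, and epsilon are illustrative, not from the thesis:

```python
import numpy as np

# Toy logistic model: p(y=1 | x) = sigmoid(w @ x + b). Weights are arbitrary.
w = np.array([1.0, -2.0, 0.5, 0.3])
b = 0.1

def loss_grad(x, y):
    """Gradient of the logistic loss w.r.t. the input x (label y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

def fgsm(x, y, eps):
    """FGSM: x_adv = x + eps * sign(grad_x L), a single max-norm-bounded step."""
    return x + eps * np.sign(loss_grad(x, y))

x = np.array([0.2, -0.1, 0.4, 0.05])
x_adv = fgsm(x, y=1, eps=0.1)
```

Each feature moves by exactly eps in the direction that increases the loss, which is precisely the "small, intelligently crafted perturbation" the abstract describes; the thesis's point is that for RF signals this perturbation also degrades the underlying communication unless the attack is communications-aware.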
- Algorithms and Architectures for UWB Receiver Design. Ibrahim, Jihad E. (Virginia Tech, 2007-01-25). Impulse-based Ultra Wideband (UWB) radio technology has recently gained significant research attention for various indoor ranging, sensing and communications applications due to the large amount of allocated bandwidth and the desirable properties of UWB signals (e.g., improved timing resolution and multipath fading mitigation). However, most of the applications have focused on indoor environments, where the UWB channel is characterized by tens to hundreds of resolvable multipath components. Such environments introduce tremendous complexity challenges to traditional radio designs in terms of signal detection and synchronization. Additionally, the extremely wide bandwidth and shared nature of the medium mean that UWB receivers must contend with a variety of interference sources. Traditional interference mitigation techniques are not amenable to UWB due to the complexity of straightforward translations to UWB bandwidths. Thus, signal detection, synchronization and interference mitigation are open research issues that must be addressed in order to exploit the potential benefits of UWB systems. This thesis seeks to address each of these three challenges by first examining and accurately characterizing common approaches borrowed from spread spectrum, and then proposing new methods which provide an improved trade-off between complexity and performance.
- Algorithms and Optimization for Wireless Networks. Shi, Yi (Virginia Tech, 2007-10-25). Recently, many new types of wireless networks have emerged for both civil and military applications, such as wireless sensor networks and ad hoc networks. To improve the performance of these wireless networks, many advanced communication techniques have been developed at the physical layer. For both theoretical and practical purposes, it is important for a network researcher to understand the performance limits of these new wireless networks. Such performance limits are important not only for theoretical understanding, but also because they can be used as benchmarks for the design of distributed algorithms and protocols. However, due to some unique characteristics associated with these networks, existing analytical techniques may not apply directly. As a result, new theoretical results, along with new mathematical techniques, need to be developed. In this dissertation, we focus on the design of new algorithms and optimization techniques to study the theoretical performance limits of these new wireless networks, mainly sensor networks and ad hoc networks. Wireless sensor networks consist of battery-powered nodes that are endowed with a multitude of sensing modalities. A wireless sensor network can provide in-situ, unattended, high-precision, and real-time observation over a vast area. Wireless ad hoc networks are characterized by the absence of infrastructure support. Nodes in an ad hoc network are able to organize themselves into a multi-hop network. An ad hoc network can operate in a stand-alone fashion or could possibly be connected to a larger network such as the Internet (also known as mesh networks). For these new wireless networks, a number of advanced physical layer techniques, e.g., ultra wideband (UWB), multiple-input multiple-output (MIMO), and cognitive radio (CR), have been employed.
These new physical layer technologies have the potential to improve network performance. However, they also introduce some unique design challenges. For example, a CR is capable of reconfiguring its RF on the fly and switching to newly selected frequency bands. It is much more advanced than the current multi-channel multi-radio (MC-MR) technology. MC-MR remains a hardware-based radio technology: each radio can only operate on a single channel at a time, and the number of concurrent channels that can be used at a wireless node is limited by the number of radio interfaces. A CR, by contrast, can use multiple bands at the same time. In addition, an MC-MR based wireless network typically assumes there is a set of "common channels" available to all nodes in the network, while in a CR network each node may have a different set of frequency bands based on its particular location. These important differences between MC-MR and CR mean that the algorithmic design for a CR network is substantially more complex than that under MC-MR. Due to the unique characteristics of these new wireless networks, it is necessary to consider models and constraints at multiple layers (e.g., physical, link, and network) when we explore network performance limits. The formulations of these cross-layer problems are usually in very complex forms and are mathematically challenging. We aim to develop novel algorithmic design and optimization techniques that provide optimal or near-optimal solutions. The main contributions of this dissertation are summarized as follows. 1. Node lifetime and rate allocation. We study the sensor node lifetime problem by considering not only maximizing the time until the first node fails, but also maximizing the lifetimes of all the nodes in the network. For fairness, we maximize node lifetimes under the lexicographic max-min (LMM) criterion. Our contributions are two-fold.
First, we develop a polynomial-time algorithm based on a parametric analysis (PA) technique, which has a much lower computational complexity than an existing state-of-the-art approach. We also present a polynomial-time algorithm to calculate the flow routing schedule such that the LMM-optimal node lifetime vector can be achieved. Second, we show that the same approach can be employed to address a different but related problem, called the LMM rate allocation problem. More importantly, we discover an elegant duality relationship between the LMM node lifetime problem and the LMM rate allocation problem. We show that it is sufficient to solve only one of the two problems and that important insights can be obtained from the duality results. 2. Base station placement. Base station location has a significant impact on sensor network lifetime. We aim to determine the best location for the base station so as to maximize the network lifetime. For a multi-hop sensor network, this problem is particularly challenging because data routing strategies also affect the network lifetime performance. We present an approximation algorithm that can guarantee (1 - ε)-optimal network lifetime performance for any desired error bound ε > 0. The key step is to divide the continuous search space into a finite number of subareas and represent each subarea with a "fictitious cost point" (FCP). We prove that the largest network lifetime achieved by one of these FCPs is (1 - ε)-optimal. This approximation algorithm offers a significant reduction in complexity when compared to a state-of-the-art algorithm, and represents the best known result for this problem. 3. Mobile base station. The benefits of using a mobile base station to prolong sensor network lifetime have been well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms have remained difficult to develop.
Our main result hinges upon a novel transformation of the joint base station movement and flow routing problem from the time domain to the space domain. Based on this transformation, we first show that if the base station is allowed to be present only on a set of pre-defined points, then we can find the optimal sojourn time for the base station at each of these points so that the overall network lifetime is maximized. Building on this finding, we show that when the location of the base station is unconstrained (i.e., it can move to any point in the two-dimensional plane), we can develop an approximation algorithm for the joint mobile base station and flow routing problem such that the network lifetime is guaranteed to be at least (1 - ε) of the maximum network lifetime, where ε can be made arbitrarily small. This is the first theoretical result with a performance guarantee for this problem. 4. Spectrum sharing in CR networks. Cognitive radio is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. We consider a cross-layer design of scheduling and routing with the objective of minimizing the required network-wide radio spectrum usage to support a set of user sessions. Here, scheduling considers how to use a pool of unequal-size frequency bands for concurrent transmissions, and routing considers how to transmit data for each user session. We develop a near-optimal algorithm based on a sequential fixing (SF) technique, where the determination of scheduling variables is performed iteratively through a sequence of linear programs (LPs). Upon completing the fixing of these scheduling variables, the values of the other variables in the optimization problem can be obtained by solving an LP. 5. Power control in CR networks. We further consider the case of variable transmission power in CR networks.
Here, our objective is to minimize the total required bandwidth-footprint product (BFP) to support a set of user sessions. As a basis, we first develop an interference model for scheduling when power control is performed at each node. This model extends the existing so-called protocol models for wireless networks, in which transmission power is deterministic. As a result, this model can be used for a broad range of problems where power control is part of the optimization space. An efficient solution procedure based on the branch-and-bound framework and convex hull relaxations is proposed to provide (1 - ε)-optimal solutions. This is the first theoretical result on this important problem.
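The spirit of the fictitious-cost-point idea in contribution 2 — discretize a continuous placement space and pick the best candidate — can be illustrated very loosely. The energy model, sensor layout, and path-loss exponent below are made-up stand-ins, not the dissertation's formulation (which also optimizes multi-hop routing):

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 10, size=(20, 2))   # made-up sensor layout
energy = np.full(20, 100.0)                  # initial battery, arbitrary units
alpha = 2.0                                  # assumed path-loss exponent

def min_lifetime(bs):
    """Lifetime of the weakest sensor if every sensor reports directly to bs."""
    cost = np.linalg.norm(sensors - bs, axis=1) ** alpha + 1e-9
    return (energy / cost).min()

# Discretize the continuous search space into a grid of candidate points
# and keep the candidate that maximizes the minimum lifetime.
grid = np.mgrid[0:10:21j, 0:10:21j].reshape(2, -1).T
best = max(grid, key=min_lifetime)
```

Refining the grid trades computation for a tighter approximation, which is the intuition behind the (1 - ε) guarantee, although the dissertation's proof hinges on how the subareas are constructed rather than on naive gridding.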
- Analysis and Design of Cognitive Radio Networks and Distributed Radio Resource Management Algorithms. Neel, James O'Daniell (Virginia Tech, 2006-09-06). Cognitive radio is frequently touted as a platform for implementing dynamic distributed radio resource management algorithms. In the envisioned scenarios, radios react to measurements of the network state and change their operation according to some goal-driven algorithm. Ideally this flexibility and reactivity yields tremendous gains in performance. However, when the adaptations of the radios also change the network state, an interactive decision process is spawned, and once-desirable algorithms can lead to catastrophic failures when deployed in a network. This document presents techniques for modeling and analyzing the interactions of cognitive radios for the purpose of improving the design of cognitive radio and distributed radio resource management algorithms, with particular interest in characterizing the algorithms' steady-state, convergence, and stability properties. This is accomplished by combining traditional engineering and nonlinear programming analysis techniques with techniques from game theory to create a powerful model-based approach that permits rapid characterization of a cognitive radio algorithm's properties. Insights gleaned from these models are used to establish novel design guidelines for cognitive radio design and powerful low-complexity cognitive radio algorithms. This research led to the creation of a new model of cognitive radio network behavior, an extensive number of new results related to the convergence, stability, and identification of potential and supermodular games, numerous design guidelines, and several novel algorithms related to power control, dynamic frequency selection, interference avoidance, and network formation.
It is believed that by applying the analysis techniques and the design guidelines presented in this document, any wireless engineer will be able to quickly develop cognitive radio and distributed radio resource management algorithms that will significantly improve spectral efficiency and network and device performance while removing the need for significant post-deployment site management.
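The convergence machinery for potential games referenced in the entry above can be illustrated with round-robin best-response dynamics on a toy two-player coordination game (the payoff table is made up; finite exact potential games have the finite improvement property, so such dynamics always terminate at a Nash equilibrium):

```python
# (row_action, col_action) -> (row_payoff, col_payoff); an exact potential
# game since both players share the same incentive structure here.
payoff = {
    (0, 0): (2, 2), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),
}

state = [1, 0]                        # arbitrary starting action profile
for _ in range(10):                   # round-robin best responses
    for i in (0, 1):
        state[i] = max((0, 1),
                       key=lambda a: payoff[tuple(state[:i] + [a] + state[i + 1:])][i])
print(state)
```

In the radio-resource setting, "actions" would be transmit powers or channel choices and the potential function would track aggregate interference; the same argument then guarantees the distributed algorithm settles rather than oscillating.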
- Analysis and Implementation of a Novel Single Channel Direction Finding Algorithm on a Software Radio Platform. Keaveny, John Joseph (Virginia Tech, 2005-02-11). A radio direction finding (DF) system is an antenna array and a receiver arranged in combination to determine the azimuth angle of a distant emitter. Basically, all DF systems derive the emitter location from an initial determination of the angle-of-arrival (AOA). Radio direction finding techniques have classically been based on multiple-antenna systems employing multiple receivers. Classic techniques such as MUSIC [1][2] and ESPRIT use simultaneous phase information from each antenna to estimate the angle-of-arrival of the signal of interest. In many scenarios (e.g., hand-held systems), however, multiple receivers are impractical. Thus, single channel techniques are of interest, particularly in mobile scenarios. Although the amount of existing research on single channel DF is considerably less than for multi-channel direction finding, single channel direction finding techniques have been previously investigated. Since many of the single channel direction finding techniques are older analog techniques that have been analyzed in previous work, we investigate a new single channel direction finding technique that takes specific advantage of digital capabilities. Specifically, we propose a phase-based method that uses a bank of Phase-Locked Loops (PLLs) in combination with an eight-element circular array. Our method is similar to the Pseudo-Doppler method in that it samples the antennas in a circular array using a commutative switch. In the proposed approach, the sampled data is fed to a bank of PLLs which track the phase on each element. The parallel PLLs are implemented in software, and their outputs are fed to a signal processing block that estimates the AOA.
This thesis presents the details of the new Phase-Locked Loop (PLL) algorithm and compares its performance to existing single channel DF techniques such as the Watson-Watt and the Pseudo-Doppler techniques. We also describe the implementation of the PLL algorithm on a DRS Signal Solutions, Incorporated (DRS-SS) WJ-8629A Software Definable Receiver with Sunrise™ Technology and present measured performance results.
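The per-element phase tracking described above is the job of a digital PLL. A minimal second-order software PLL locking onto a complex tone (sample rate, tone frequency, initial phase, and loop gains are all illustrative values, not parameters from the thesis):

```python
import numpy as np

fs, f0, n = 1000.0, 5.0, 2000
t = np.arange(n) / fs
x = np.exp(1j * (2 * np.pi * f0 * t + 0.7))   # tone with a 0.7 rad phase offset

phase_est, freq_est = 0.0, 0.0
kp, ki = 0.05, 0.002                           # proportional and integral gains
for sample in x:
    err = np.angle(sample * np.exp(-1j * phase_est))  # phase detector
    freq_est += ki * err                              # loop filter (integrator)
    phase_est += freq_est + kp * err                  # NCO phase update
```

After lock, freq_est settles at the tone's phase increment per sample (2*pi*f0/fs) and phase_est tracks the instantaneous phase; in a DF bank, the tracked phases across the circular array feed the AOA estimator.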
- Analysis of Advanced Diversity Receivers for Fading Channels. Gaur, Sudhanshu (Virginia Tech, 2003-09-19). The proliferation of new wireless technologies has rekindled interest in the design, analysis and implementation of suboptimal receiver structures that provide good error probability performance with reduced power consumption and complexity, particularly when the order of diversity is large. This thesis presents a unified analytical framework to perform a trade-off study for a class of hybrid generalized selection combining techniques for ultra-wideband, spread-spectrum and millimeter-wave communication receiver designs. The thesis also develops an exact mathematical framework to analyze the performance of a dual-diversity equal gain combining (EGC) receiver in correlated Nakagami-m channels, a problem which had defied a simple solution in the past. The framework facilitates efficient evaluation of the mean and variance of the coherent EGC output signal-to-noise ratio, the outage probability and the average symbol error probability for a broad range of digital modulation schemes. A comprehensive study of various dual-diversity techniques with non-independent and non-identical fading statistics is also presented. Finally, the thesis develops closed-form solutions for a few integrals involving the generalized Marcum Q-function. Integrals of these types often arise in the analysis of multichannel diversity reception of differentially coherent and noncoherent digital communications over Nakagami-m channels. Several other applications are also discussed.
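The generalized Marcum Q-function mentioned above is straightforward to evaluate numerically via its standard identity with the noncentral chi-square tail (this is a well-known identity, not a result of the thesis; the example values are arbitrary):

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q(M, a, b):
    """Generalized Marcum Q-function Q_M(a, b).

    Uses the identity Q_M(a, b) = P(X > b**2) where X follows a noncentral
    chi-square distribution with 2M degrees of freedom and noncentrality a**2.
    """
    return ncx2.sf(b ** 2, df=2 * M, nc=a ** 2)

# Q_M(a, 0) = 1 for any a, and Q_M(a, b) decreases toward 0 as b grows.
print(marcum_q(1, 1.0, 0.0), marcum_q(1, 1.0, 2.0))
```

Having a reliable numerical evaluator like this is what makes closed-form results for Marcum-Q integrals easy to sanity-check.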
- Analysis of Refractive Effects on Mid-Latitude SuperDARN Velocity Measurements
Dixon, Kristoffer Charles (Virginia Tech, 2014-10-27)
Ionospheric refractive index values have been determined for the first time at mid latitudes using frequency-switched SuperDARN plasma convection velocity estimates. Previous works found a disparity between high latitude SuperDARN plasma convection velocities and those made by other instruments. It was noted that the scattering volume's refractive index was being neglected when estimating plasma convection velocities, meaning a correction factor was needed to bring the estimates into agreement with other measurements. Later work proposed a solution that implemented frequency switching in SuperDARN radars and determined a single correction factor based on many years of data. We present case-study-driven research which applies the principles of these previous works to mid latitudes in an attempt to determine the refractive effect in mid latitude SuperDARN plasma convection velocity data by examining frequency-switched quiet-time ionospheric scatter. It was found that the 1/2-hop ionospheric scatter exhibited little to no measurable refractive effect (n ∼ 1), while the 1-1/2-hop ionospheric scatter tended to exhibit measurable refractive effects (n ∼ 0.7). This analysis is then extended to a storm-time 1/2-hop ionospheric scatter case study. It was again found that the refractive effects were nearly negligible (n ∼ 1), indicating that the 1/2-hop plasma convection velocities reported by mid latitude SuperDARN radars require only a very small correction factor, if any at all.
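The principle behind frequency switching can be sketched with an idealized cold, unmagnetized plasma model, n(f) = sqrt(1 - (fp/f)^2): if the radar reports v_i = n(f_i) * v_true at each operating frequency, the ratio of the two measured velocities fixes the plasma frequency fp, and with it the index and the corrected velocity. The function name is hypothetical and this ignores magnetic field, collisions, and measurement noise, all of which the actual analysis must contend with.

```python
import numpy as np

def index_from_frequency_switching(v1, f1, v2, f2):
    """Solve for the scattering-volume refractive index from line-of-sight
    velocities v1, v2 measured at radar frequencies f1, f2 (Hz), assuming
    v_i = n(f_i) * v_true with n(f) = sqrt(1 - (fp/f)**2)."""
    rho = (v1 / v2) ** 2               # equals (1 - (fp/f1)^2) / (1 - (fp/f2)^2)
    fp2 = (1 - rho) / (1 / f1**2 - rho / f2**2)   # plasma frequency squared
    n1 = np.sqrt(1 - fp2 / f1**2)
    return n1, v1 / n1                 # index at f1 and the corrected velocity
```

An n near 1 (as found for 1/2-hop scatter) leaves the reported velocity essentially unchanged, while n ∼ 0.7 (as for 1-1/2-hop scatter) implies a correction factor of roughly 1/0.7 ≈ 1.4.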
- Analysis of RF Front-End Non-linearity on Symbol Error Rate in the Presence of M-PSK Blocking Signals
Dsouza, Jennifer (Virginia Tech, 2017-10-03)
Radio frequency (RF) receivers are inherently non-linear due to non-linear components contained within the RF front-end, such as the low noise amplifier (LNA) and mixer. When receivers operate in the non-linear region, system performance is degraded by effects such as intermodulation products and cross-modulation. Intermodulation products are the result of adjacent channel signals that combine and create intermodulation distortion of the received signal. We call these adjacent channel signals blockers. Blockers are unavoidable in wideband receivers, and their effect must be analyzed and properly addressed. This M.S. thesis studies the effect of blockers on system performance, specifically the symbol error rate (SER), as a function of the receiver non-linearity figure and the blocking signal power and modulation format. There have been numerous studies on the effect of non-linearity on the probability of true and false detections in spectrum sensing when blockers are present. There has also been research showing the optimal modulation scheme for effective jamming. However, we are not aware of work analyzing the effect of modulated adjacent channel blockers on communication system performance. The approach taken in this work is a theoretical derivation followed by numerical analysis aimed at quantifying the effect of receiver non-linearity on communication system performance as a function of (1) receiver characteristics, (2) blocking signal powers, (3) signal and blocker modulation format, and (4) phase-synchronized/non-synchronized blocker reception. The work focuses on M-PSK modulation schemes. For high blocker powers and strong non-linearity, the Es/No (Eb/No) performance loss can be as high as 4.7 dB for a BPSK modulated signal and BPSK modulated blockers received in sync with the desired signal.
When blockers have a random phase offset with respect to the desired signal, the performance degradation is about 2 dB for BPSK modulated desired and blocker signals. It was found that for a BPSK transmitted signal with phase-synchronous blockers, the SER (BER) deteriorates the most when the blocking signals are of the same modulation. The effect is reduced, but still significant, as the modulation order of the signal of interest, the blockers, or both increases.
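The mechanism by which blockers corrupt the desired channel can be shown with a two-tone toy model: passing two strong blocker tones at f1 and f2 through a memoryless cubic non-linearity y = a1*x + a3*x^3 generates a third-order intermodulation product at 2*f1 - f2 that lands directly on a weak desired carrier. The tone frequencies and the coefficients a1, a3 are arbitrary illustrative choices, not the thesis's receiver model or its SER derivation.

```python
import numpy as np

fs, nfft = 1.024e6, 4096               # sample rate chosen so tones fall on FFT bins
t = np.arange(nfft) / fs
f_des, f1, f2 = 100e3, 140e3, 180e3    # note 2*f1 - f2 = f_des
x = 0.01 * np.cos(2 * np.pi * f_des * t)                       # weak desired carrier
x += np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)   # strong blockers

a1, a3 = 1.0, -0.1                     # memoryless model y = a1*x + a3*x**3
y_lin = a1 * x                         # linear receiver (reference)
y_nl = a1 * x + a3 * x**3              # non-linear receiver

freqs = np.fft.rfftfreq(nfft, 1 / fs)
bin_des = int(np.argmin(np.abs(freqs - f_des)))
amp_lin = np.abs(np.fft.rfft(y_lin))[bin_des]   # desired-bin amplitude, linear
amp_nl = np.abs(np.fft.rfft(y_nl))[bin_des]     # desired-bin amplitude, non-linear
```

Here the IM3 term (3/4)*a3*A1^2*A2 swamps the 0.01-amplitude desired carrier in its own bin, which is exactly the kind of in-band distortion the SER analysis quantifies for modulated blockers.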
- The Applicability of the Tap-Delay Line Channel Model to Ultra Wideband
Yang, Liu (Virginia Tech, 2004-09-15)
Ultra-wideband (UWB) communication systems are highly promising because of their capabilities for high data rate information transmission with low power consumption and low interference, and their immunity to multipath fading. More importantly, they have the potential to relieve the "spectrum drought" caused by the explosion of wireless systems in the past decade by operating in the same bands as existing narrowband systems. Given the extremely large bandwidth of UWB signals, we need to revisit UWB channel modeling. Specifically, we need to verify whether or not the traditional tap-delay line channel model is still applicable to UWB. One essential task involved in channel modeling is deconvolving the channel impulse response from the measurement data. Both frequency domain and time domain techniques were studied in this work. After a comparison, we selected a time domain technique known as the CLEAN algorithm for our channel modeling analysis. A detailed analysis of the CLEAN algorithm is given, and it is found to be sufficient for our application. The impact of per-path pulse distortion due to various mechanisms on the tap-delay line channel model is discussed. It is shown that, with cautious interpretation of the channel impulse response, the tap-delay line channel model is still applicable to UWB.
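The serial-subtraction idea behind CLEAN deconvolution can be sketched in a few lines: repeatedly find the delay at which a template pulse best matches the residual waveform, record a (delay, gain) tap, and subtract the scaled, shifted template. This is a minimal sketch; a practical UWB CLEAN adds a loop gain and stopping threshold and must contend with the per-path pulse distortion the abstract mentions.

```python
import numpy as np

def clean(received, template, n_paths):
    """Minimal serial CLEAN: extract n_paths (delay, gain) taps by
    iteratively matching and subtracting the template pulse."""
    residual = np.asarray(received, dtype=float).copy()
    tmpl = np.asarray(template, dtype=float)
    energy = float(np.dot(tmpl, tmpl))
    taps = []
    for _ in range(n_paths):
        # Normalized correlation of the template against the residual
        corr = np.correlate(residual, tmpl, mode="valid") / energy
        d = int(np.argmax(np.abs(corr)))        # strongest remaining path delay
        taps.append((d, corr[d]))
        residual[d:d + len(tmpl)] -= corr[d] * tmpl   # subtract its contribution
    return taps, residual
```

On a synthetic two-path channel with non-overlapping arrivals, the recovered taps match the true delays and gains exactly and the residual collapses to zero.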
- Application of Machine Learning to Multi Antenna Transmission and Machine Type Resource Allocation
Emenonye, Don-Roberts Ugochukwu (Virginia Tech, 2020-09-11)
Wireless communication systems are a well-researched area of electrical engineering that has continually evolved over the past decades. This constant evolution and development have led to well-formulated theoretical baselines in terms of reliability and efficiency. However, most communication baselines are derived by splitting baseband communications into a series of modular blocks such as modulation, coding, channel estimation, and orthogonal frequency division multiplexing. These blocks are then independently optimized. Although this has led to a very efficient and reliable process, theoretical verification of the optimality of this design process is not feasible due to the complexities of each individual block. In this work, we propose two modifications to these conventional wireless systems. First, with the goal of designing better space-time block codes for improved reliability, we propose to redesign the transmit and receive blocks of the physical layer. We replace a portion of the transmit chain, from modulation to antenna mapping, with a neural network. Similarly, the receiver/decoder is also replaced with a neural network. In other words, the first part of this work focuses on jointly optimizing the transmit and receive blocks to produce a set of space-time codes that are resilient to Rayleigh fading channels. We compare our results to the conventional orthogonal space-time block codes for multiple antenna configurations. The second part of this work investigates the possibility of designing a distributed multiagent reinforcement learning-based multi-access algorithm for machine type communication.
This work recognizes that cellular networks are being proposed as a solution for the connectivity of machine type devices (MTDs), and that one of the most crucial aspects of scheduling in cellular connectivity is the random access procedure. The random access process is used by conventional cellular users to receive an allocation for uplink transmissions. This process usually requires six resource blocks. It is efficient for cellular users to perform this process because the transmission of cellular data usually requires more than six resource blocks; hence, it is relatively efficient to perform the random access process in order to establish a connection. Moreover, as long as cellular users maintain synchronization, they do not have to undertake the random access process every time they have data to transmit. They can maintain a connection with the base station through discontinuous reception. On the other hand, the random access process is unsuitable for MTDs because MTDs usually have small-sized packets, and performing the random access process to transmit such small-sized packets is highly inefficient. Also, most MTDs are power constrained, so they turn off when they have no data to transmit. This means that they lose their connection and cannot maintain any form of discontinuous reception; hence, they must perform the random access process each time they have data to transmit. Due to these observations, explicit scheduling is undesirable for MTC. To overcome these challenges, we propose bypassing the entire scheduling process by using a grant-free resource allocation scheme. In this scheme, MTDs pseudo-randomly transmit their data in random access slots. Note that this results in the possibility of a large number of collisions during the random access slots. To alleviate the resulting congestion, we exploit a heterogeneous network and investigate the optimal MTD-BS association which minimizes the long-term congestion experienced in the overall cellular network.
Our results show that we can derive the optimal MTD-BS association when the number of MTDs is less than the total number of random access slots.
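Why collisions dominate grant-free access as load grows can be seen from the simplest slotted-ALOHA-style model, in which each of m devices independently picks one of k random access slots. The dissertation's congestion metric and MTD-BS association problem are richer than this, and the function names are illustrative.

```python
def collision_prob(m, k):
    """Probability that a tagged device's grant-free transmission collides
    when each of m devices independently picks one of k slots uniformly."""
    return 1.0 - (1.0 - 1.0 / k) ** (m - 1)

def expected_successes(m, k):
    """Expected number of devices that end up alone in their chosen slot."""
    return m * (1.0 - 1.0 / k) ** (m - 1)
```

The collision probability rises steeply once m approaches k, which is consistent with the reported finding that the optimal association is derivable when the number of MTDs is below the total number of random access slots.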
- The application of multiuser detection to cellular CDMA
Buehrer, R. Michael (Virginia Tech, 1996-06-19)
This research investigates the application of multiuser detection to Code Division Multiple Access for cellular communications. This investigation focuses on the use of multiuser receivers at the base station of mobile radio systems. The first two chapters are dedicated to multiuser detection in general. An extensive literature survey is performed on the research concerning multiuser receivers to date. Six major receiver structures are chosen for extensive simulation studies. The bit error rate performance of these receivers is investigated in several system environments. Additionally, practical issues are considered such as computational complexity and robustness to code tracking errors. From this work, one receiver structure is identified for further study, namely multistage interference cancellation. The theoretical performance of this receiver is analyzed using a standard Gaussian Approximation and an Improved Gaussian Approximation for AWGN and fading environments. Additionally, the resistance of the receiver to interference energy levels is explored. Parameter estimation is an important issue for interference cancellation. Simple methods of improving parameter estimation are examined, as is the effect of parameter estimation error on system performance. A baseband hardware implementation is detailed and several design challenges are presented. Results are given for the performance of the implemented receiver and shown to match well with theory and computer simulation. Finally, the implications of this research are discussed.
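The interference-cancellation idea can be sketched with a noiseless two-user synchronous CDMA toy example: form conventional matched-filter decisions, regenerate each user's multiple-access interference (MAI) from those decisions and the known code cross-correlations, and subtract it before re-deciding. This is a single cancellation stage under ideal parameter knowledge; the dissertation analyzes multiple stages and the effect of parameter estimation error.

```python
import numpy as np

# Two synchronous CDMA users with correlated length-4 spreading codes
S = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0, -1.0]]) / 2.0
b = np.array([1.0, -1.0])          # transmitted BPSK symbols
r = S.T @ b                        # noiseless received chip vector

# Stage 0: conventional matched-filter (correlator) decisions
y = S @ r                          # equals R @ b, with R the code correlations
b_mf = np.sign(y)

# Stage 1: regenerate each user's MAI from the decisions and subtract it
R = S @ S.T                        # code cross-correlation matrix
mai = (R - np.diag(np.diag(R))) @ b_mf
b_pic = np.sign(y - mai)
```

With correct stage-0 decisions the subtraction removes the cross-correlation terms exactly, restoring the single-user decision statistics; with noise and estimation error the cancellation is only partial, which is what the Gaussian-approximation analysis captures.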
- Applications of Sensor Fusion to Classification, Localization and Mapping
Abdelbar, Mahi Othman Helmi Mohamed Helmi Hussein (Virginia Tech, 2018-04-30)
Sensor fusion is an essential framework in many engineering fields. It is a relatively new paradigm for integrating data from multiple sources to synthesize new information that, in general, would not have been obtainable from the individual parts. Within the wireless communications field, many emerging technologies, such as Wireless Sensor Networks (WSN), the Internet of Things (IoT), and spectrum sharing schemes, depend on large numbers of distributed nodes working collaboratively and sharing information. In addition, there is a huge proliferation of smartphones in the world with a growing set of cheap, powerful embedded sensors. Smartphone sensors can collectively monitor a diverse range of human activities and the surrounding environment far beyond the scale of what was possible before. Wireless communications open up great opportunities for the application of sensor fusion techniques at multiple levels. In this dissertation, we identify two key problems in wireless communications that can greatly benefit from sensor fusion algorithms: Automatic Modulation Classification (AMC) and indoor localization and mapping based on smartphone sensors. Automatic Modulation Classification is a key technology in Cognitive Radio (CR) networks, spectrum sharing, and wireless military applications. Although extensively researched, the performance of signal classification at a single node is largely bounded by channel conditions, which can easily be unreliable. Applying sensor fusion techniques to the signal classification problem within a network of distributed nodes is presented as a means to overcome the detrimental channel effects faced by single nodes and provide more reliable classification performance. Indoor localization and mapping has gained increasing interest in recent years.
Currently-deployed positioning techniques, such as the widely successful Global Positioning System (GPS), are optimized for outdoor operation. Providing indoor location estimates with high accuracy, up to the room or suite level, is an ongoing challenge. Recently, smartphone sensors, especially accelerometers and gyroscopes, have provided attractive solutions to the indoor localization problem through Pedestrian Dead-Reckoning (PDR) frameworks, although these still suffer from several challenges. Sensor fusion algorithms can be applied to provide new and efficient solutions to the indoor localization problem at two different levels: fusion of measurements from different sensors in a smartphone, and fusion of measurements from several smartphones within a collaborative framework.
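The core PDR step that these frameworks build on is tiny: each detected footstep advances the position estimate by an estimated step length along an estimated heading. The function name is illustrative; in a real system the heading comes from fusing gyroscope and accelerometer data, and the small per-step errors accumulate into the drift that motivates fusion across sensors and devices.

```python
import math

def pdr_update(x, y, heading_rad, step_length):
    """One Pedestrian Dead-Reckoning update: advance the (x, y) position
    estimate by one detected step of the given length along the given
    heading (radians, measured from the x-axis)."""
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))
```

Because each update compounds heading and step-length error, PDR alone degrades over long walks, which is why fusing it with other measurement sources is attractive.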
- Approaches to Joint Base Station Selection and Adaptive Slicing in Virtualized Wireless Networks
Teague, Kory Alan (Virginia Tech, 2018-11-19)
Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. This work investigates the problem of selecting base stations to construct virtual networks for a set of service providers, and the adaptive slicing of resources between the service providers to satisfy service provider demands. A two-stage stochastic optimization framework is introduced to solve this problem, and two methods are presented for approximating the stochastic model. The first method uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second method uses a genetic algorithm for base station selection and performs adaptive slicing via a single-stage linear optimization problem. A number of scenarios are simulated using a log-normal model designed to emulate demand from real-world cellular networks. Simulations indicate that the first approach can provide a reasonably tight solution, but is constrained because its time expense grows exponentially with the number of parameters. The second approach provides a significant improvement in run time at the cost of marginal error.
- Approaches to Multiple-source Localization and Signal Classification
Reed, Jesse (Virginia Tech, 2009-05-05)
Source localization with a wireless sensor network remains an important area of research as the number of applications with this problem increases. This work considers the problem of source localization by a network of passive wireless sensors. The primary means by which localization is achieved is through direction-finding at each sensor, and in some cases, range estimation as well. Both single and multiple-target scenarios are considered in this research. In single-source environments, a solution that outperforms the classic least squared error estimation technique by combining direction and range estimates to perform localization is presented. In multiple-source environments, two solutions to the complex data association problem are addressed. The first proposed technique offers a less complex solution to the data association problem than a brute-force approach at the expense of some degradation in performance. For the second technique, the process of signal classification is considered as another approach to the data association problem. Environments in which each signal possesses unique features can be exploited to separate signals at each sensor by their characteristics, which mitigates the complexity of the data association problem and in many cases improves the accuracy of the localization. Two approaches to signal-selective localization are considered in this work. The first is based on the well-known cyclic MUSIC algorithm, and the second combines beamforming and modulation classification. Finally, the implementation of a direction-finding system is discussed. This system includes a uniform circular array as a radio frequency front end and the universal software radio peripheral as a data processor.
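The classic least-squared-error baseline mentioned above, bearings-only triangulation, can be sketched directly: a bearing theta_i at sensor p_i constrains the source to the line through p_i with direction (cos theta_i, sin theta_i), and stacking one line equation per sensor gives an overdetermined linear system. The function name is illustrative, and this is only the baseline; the thesis's proposed estimator additionally fuses range estimates.

```python
import numpy as np

def bearings_only_ls(sensors, bearings):
    """Least-squares triangulation of one source from bearing measurements
    at known 2-D sensor positions. Each bearing contributes the line
    equation n_i . p = n_i . p_i, with n_i normal to the bearing direction."""
    A, c = [], []
    for (sx, sy), th in zip(sensors, bearings):
        normal = np.array([-np.sin(th), np.cos(th)])  # perpendicular to bearing
        A.append(normal)
        c.append(normal @ np.array([sx, sy]))
    est, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
    return est
```

With noiseless bearings from non-collinear sensors the stacked system is consistent and the least-squares solution recovers the source exactly; with noisy bearings it returns the point minimizing the squared distance to the bearing lines.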
- Array Processing for Mobile Wireless Communication in the 60 GHz Band
Jakubisin, Daniel J. (Virginia Tech, 2012-11-09)
In 2001, the Federal Communications Commission made available a large block of spectrum known as the 60 GHz band. The 60 GHz band is attractive because it provides the opportunity of multi-Gbps data rates with unlicensed commercial use. One of the main challenges facing the use of this band is poor propagation characteristics including high path loss and strong attenuation due to oxygen absorption. Antenna arrays have been proposed as a means of combating these effects. This thesis provides an analysis of array processing for communication systems operating in the 60 GHz band. Based on measurement campaigns at 60 GHz, deterministic modeling of the channel through ray tracing is proposed. We conduct a site-specific study using ray tracing to model an outdoor and an indoor environment on the Virginia Tech campus. Because arrays are required for antenna gain and adaptability, we explore the use of arrays as a form of equalization in the presence of channel-induced intersymbol interference. The first contribution of this thesis is to establish the expected performance achieved by arrays in the outdoor environment. The second contribution is to analyze the performance of adaptive algorithms applied to array processing in mobile indoor and outdoor environments.