Browsing by Author "Yang, Yaling"
Now showing 1 - 20 of 109
Results Per Page
Sort Options
- Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing UnitsLi, Min (Virginia Tech, 2012-09-17)With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily together with the increase in the design complexity of logic circuits. As a result, the efforts taken for designing, testing, and debugging digital systems have increased tremendously. Although the electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computational intensive applications still take long execution times. This is especially the case for testing and validation. In order tomeet the time-to-market constraints and also to come up with a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). This dissertation concentrates on a subset of EDA algorithms related to testing and validation. In particular, within the area of testing, fault simulation, diagnostic simulation and reliability analysis are explored. We also investigated the approaches to parallelize state justification on GPUs, which is one of the most difficult problems in the validation area. Firstly, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art graphic processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computation efficiency on the GPU. The experimental results demonstrate a speedup of up to 4Ã compared to another GPU-based fault simulator. Then, another GPU based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which exploits high computation efficiency on the GPU. We introduce a fault pair based approach to alleviate the limited memory capacity on GPUs. Also, multi-fault-signature and dynamic load balancing techniques are introduced for the best usage of computing resources on-board. With continuously feature size scaling and advent of innovative nano-scale devices, the reliability analysis of the digital systems becomes more important nowadays. However, the computational cost to accurately analyze a large digital system is very high. We proposes an high performance reliability analysis tool on GPUs. To achieve highmemory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable and scalable. In the area of design validation, we investigate state justification. By employing the swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that could help us reach the corner cases during the validation of a digital system. In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to achieve maximal performance on GPUs and obtain significant speedups. The proposed algorithms based on GPU parallelism collectively aim to contribute to improving the performance of EDA tools in Computer aided design (CAD) community on GPUs and other many-core platforms.
- Algorithms and Optimization for Wireless NetworksShi, Yi (Virginia Tech, 2007-10-25)Recently, many new types of wireless networks have emerged for both civil and military applications, such as wireless sensor networks, ad hoc networks, among others. To improve the performance of these wireless networks, many advanced communication techniques have been developed at the physical layer. For both theoretical and practical purposes, it is important for a network researcher to understand the performance limits of these new wireless networks. Such performance limits are important not only for theoretical understanding, but also in that they can be used as benchmarks for the design of distributed algorithms and protocols. However, due to some unique characteristics associated with these networks, existing analytical technologies may not be applied directly. As a result, new theoretical results, along with new mathematical techniques, need to be developed. In this dissertation, we focus on the design of new algorithms and optimization techniques to study theoretical performance limits associated with these new wireless networks. In this dissertation, we mainly focus on sensor networks and ad hoc networks. Wireless sensor networks consist of battery-powered nodes that are endowed with a multitude of sensing modalities. A wireless sensor network can provide in-situ, unattended, high-precision, and real-time observation over a vast area. Wireless ad hoc networks are characterized by the absence of infrastructure support. Nodes in an ad hoc network are able to organize themselves into a multi-hop network. An ad hoc network can operate in a stand-alone fashion or could possibly be connected to a larger network such as the Internet (also known as mesh networks). For these new wireless networks, a number of advanced physical layer techniques, e.g., ultra wideband (UWB), multiple-input and multiple-output (MIMO), and cognitive radio (CR), have been employed. These new physical layer technologies have the potential to improve network performance. However, they also introduce some unique design challenges. For example, CR is capable of reconfiguring RF (on the fly) and switching to newly-selected frequency bands. It is much more advanced than the current multi-channel multi-radio (MC-MR) technology. MC-MR remains hardware-based radio technology: each radio can only operate on a single channel at a time and the number of concurrent channels that can be used at a wireless node is limited by the number of radio interfaces. While a CR can use multiple bands at the same time. In addition, an MC-MR based wireless network typically assumes there is a set of "common channels" available for all nodes in the network. While for CR networks, each node may have a different set of frequency bands based on its particular location. These important differences between MC-MR and CR warrant that the algorithmic design for a CR network is substantially more complex than that under MC-MR. Due to the unique characteristics of these new wireless networks, it is necessary to consider models and constraints at multiple layers (e.g., physical, link, and network) when we explore network performance limits. The formulations of these cross-layer problems are usually in very complex forms and are mathematically challenging. We aim to develop some novel algorithmic design and optimization techniques that provide optimal or near-optimal solutions. The main contributions of this dissertation are summarized as follows. 1. 
Node lifetime and rate allocation We study the sensor node lifetime problem by considering not only maximizing the time until the first node fails, but also maximizing the lifetimes for all the nodes in the network. For fairness, we maximize node lifetimes under the lexicographic max-min (LMM) criteria. Our contributions are two-fold. First, we develop a polynomial-time algorithm based on a parametric analysis (PA) technique, which has a much lower computational complexity than an existing state-of-the-art approach. We also present a polynomial-time algorithm to calculate the flow routing schedule such that the LMM-optimal node lifetime vector can be achieved. Second, we show that the same approach can be employed to address a different but related problem, called LMM rate allocation problem. More important, we discover an elegant duality relationship between the LMM node lifetime problem and the LMM rate allocation problem. We show that it is sufficient to solve only one of the two problems and that important insights can be obtained by inferring the duality results. 2. Base station placement Base station location has a significant impact on sensor network lifetime. We aim to determine the best location for the base station so as to maximize the network lifetime. For a multi-hop sensor network, this problem is particularly challenging as data routing strategies also affect the network lifetime performance. We present an approximation algorithm that can guarantee (1- ε)-optimal network lifetime performance with any desired error bound ε > 0. The key step is to divide the continuous search space into a finite number of subareas and represent each subarea with a "fictitious cost point" (FCP). We prove that the largest network lifetime achieved by one of these FCPs is (1- ε)-optimal. This approximation algorithm offers a significant reduction in complexity when compared to a state-of-the-art algorithm, and represents the best known result to this problem. 3. Mobile base station The benefits of using a mobile base station to prolong sensor network lifetime have been well recognized. However, due to the complexity of the problem (time-dependent network topology and traffic routing), theoretical performance limits and provably optimal algorithms remain difficult to develop. Our main result hinges upon a novel transformation of the joint base station movement and flow routing problem from the time domain to the space domain. Based on this transformation, we first show that if the base station is allowed to be present only on a set of pre-defined points, then we can find the optimal sojourn time for the base station on each of these points so that the overall network lifetime is maximized. Based on this finding, we show that when the location of the base station is un-constrained (i.e., can move to any point in the two-dimensional plane), we can develop an approximation algorithm for the joint mobile base station and flow routing problem such that the network lifetime is guaranteed to be at least (1- ε) of the maximum network lifetime, where ε can be made arbitrarily small. This is the first theoretical result with performance guarantee on this problem. 4. Spectrum sharing in CR networks Cognitive radio is a revolution in radio technology that promises unprecedented flexibility in radio communications and is viewed as an enabling technology for dynamic spectrum access. 
We consider a cross-layer design of scheduling and routing with the objective of minimizing the required network-wide radio spectrum usage to support a set of user sessions. Here, scheduling considers how to use a pool of unequal size frequency bands for concurrent transmissions and routing considers how to transmit data for each user session. We develop a near-optimal algorithm based on a sequential fixing (SF) technique, where the determination of scheduling variables is performed iteratively through a sequence of linear programs (LPs). Upon completing the fixing of these scheduling variables, the value of the other variables in the optimization problem can be obtained by solving an LP. 5. Power control in CR networks We further consider the case of variable transmission power in CR networks. Now, our objective is minimizing the total required bandwidth footprint product (BFP) to support a set of user sessions. As a basis, we first develop an interference model for scheduling when power control is performed at each node. This model extends existing so-called protocol models for wireless networks where transmission power is deterministic. As a result, this model can be used for a broad range of problems where power control is part of the optimization space. An efficient solution procedure based on the branch-and-bound framework and convex hull relaxations is proposed to provide (1- ε)-optimal solutions. This is the first theoretical result on this important problem.
- Ambient Backscatter Communication Systems: Design, Signal Detection and Bit Error Rate AnalysisDevineni, Jaya Kartheek (Virginia Tech, 2021-09-21)The success of the Internet-of-Things (IoT) paradigm relies on, among other things, developing energy-efficient communication techniques that can enable information exchange among billions of battery-operated IoT devices. With its technological capability of simultaneous information and energy transfer, ambient backscatter is quickly emerging as an appealing solution for this communication paradigm, especially for the links with low data rate requirements. However, many challenges and limitations of ambient backscatter have to be overcome for widespread adoption of the technology in future wireless networks. Motivated by this, we study the design and implementation of ambient backscatter systems, including non-coherent detection and encoding schemes, and investigate techniques such as multiple antenna interference cancellation and frequency-shift backscatter to improve the bit error rate performance of the designed ambient backscatter systems. First, the problem of coherent and semi-coherent ambient backscatter is investigated by evaluating the exact bit error rate (BER) of the system. The test statistic used for the signal detection is based on the averaging of energy of the received signal samples. It is important to highlight that the conditional distributions of this test statistic are derived using the central limit theorem (CLT) approximation in the literature. The characterization of the exact conditional distributions of the test statistic as non-central chi-squared random variable for the binary hypothesis testing problem is first handled in our study, which is a key contribution of this particular work. The evaluation of the maximum likelihood (ML) detection threshold is also explored which is found to be intractable. To overcome this, alternate strategies to approximate the ML threshold are proposed. In addition, several insights for system design and implementation are provided both from analytical and numerical standpoints. Second, the highly appealing non-coherent signal detection is explored in the context of ambient backscatter for a time-selective channel. Modeling the time-selective fading as a first-order autoregressive (AR) process, we implement a new detection architecture at the receiver based on the direct averaging of the received signal samples, which departs significantly from the energy averaging-based receivers considered in the literature. For the proposed setup, we characterize the exact asymptotic BER for both single-antenna (SA) and multi-antenna (MA) receivers, and demonstrate the robustness of the new architecture to timing errors. Our results demonstrate that the direct-link (DL) interference from the ambient power source leads to a BER floor in the SA receiver, which the MA receiver can avoid by estimating the angle of arrival (AoA) of the DL. The analysis further quantifies the effect of improved angular resolution on the BER as a function of the number of receive antennas. Third, the advantages of utilizing Manchester encoding for the data transmission in the context of non-coherent ambient backscatter have been explored. Specifically, encoding is shown to simplify the detection procedure at the receiver since the optimal decision rule is found to be independent of the system parameters. 
Through extensive numerical results, it is further shown that a backscatter system with Manchester encoding can achieve a signal-to-noise ratio (SNR) gain compared to the commonly used uncoded direct on-off keying (OOK) modulation, when used in conjunction with a multi-antenna receiver employing the direct-link cancellation. Fourth, the BER performance of frequency-shift ambient backscatter, which achieves the self-interference mitigation by spatially separating the reflected backscatter signal from the impending source signal, is investigated. The performance of the system is evaluated for a non-coherent receiver under slow fading in two different network setups: 1) a single interfering link coming from the ambient transmission occurring in the shifted frequency region, and 2) a large-scale network with multiple interfering signals coming from the backscatter nodes and ambient source devices transmitting in the band of interest. Modeling the interfering devices as a two dimensional Poisson point process (PPP), tools from stochastic geometry are utilized to evaluate the bit error rate for the large-scale network setup.
- Analysis of Firmware Security in Embedded ARM EnvironmentsBrown, Dane Andrew (Virginia Tech, 2019-09-30)Modern enterprise-grade systems with virtually unlimited resources have many options when it comes to implementing state of the art intrusion prevention and detection solutions. These solutions are costly in terms of energy, execution time, circuit board area, and capital. Sustainable Internet of Things devices and power-constrained embedded systems are thus forced to make suboptimal security trade-offs. One such trade-off is the design of architectures which prevent execution of injected shell code, yet have allowed Return Oriented Programming (ROP) to emerge as a more reliable way to execute malicious code following attacks. ROP is a method used to take over the execution of a program by causing the return address of a function to be modified through an exploit vector, then returning to small segments of otherwise innocuous code located in executable memory one after the other to carry out the attacker's aims. We show that the Tiva TM4C123GH6PM microcontroller, which utilizes anARM Cortex-M4F processor, can be fully controlled with this technique. Firmware code is pre-loaded into a ROM on Tiva microcontrollers which can be subverted to erase and rewrite the flash memory where the program resides. That same firmware is searched for a Turing-complete gadget set which allows for arbitrary execution. We then design and evaluate a method for verifying the integrity of firmware on embedded systems, in this case Solid State Drives (SSDs). Some manufacturers make firmware updates available, but their proprietary protections leave end users unable to verify the authenticity of the firmware post installation. This means that attackers who are able to get a malicious firmware version installed on a victim SSD are able to operate with full impunity, as the owner will have no tools for detection. We have devised a method for performing side channel analysis of the current drawn by an SSD, which can compare its behavior while running genuine firmware against its behavior when running modified firmware. We train a binary classifier with samples of both versions and are able to consistently discriminate between genuine firmware and modified firmware, even despite changes in external factors such as temperature and supplied power.
- Android Phone Controlled Beagle Board Based PSCR in a Dynamic Spectrum Access EnvironmentRadhakrishnan, Aravind (Virginia Tech, 2010-09-02)Public Safety Cognitive Radio (PSCR) is a Software Defined Radio(SDR) developed by the Center for Wireless Telecommunications (CWT) at Virginia Tech. PSCR can configure itself to interoperate with any public safety waveform it finds during the scan procedure. It also offers users the capability to scan/classify both analog and digital waveforms. The current PSCR architecture can only run on a general purpose processor and hence is not deployable to the public safety personnel. In the first part of this thesis an Android based control application for the PSCR on a Beagle Board(BB) and the GUI for the control application are developed. The Beagle Board is a low-cost, fanless single board computer that unleashes laptop-like performance and expandability. The Android based Nexus One connected to the Beagle Board via USB is used to control the Beagle Board and enable operations like scan, classify, talk, gateway etc. In addition to the features that exist in the current PSCR a new feature that enables interoperation with P25 (CPFSK modulation) protocol based radios is added. In this effort of porting the PSCR to Beagle Board my contributions are the following (i) communication protocol between the Beagle Board and the Nexus One (ii) PSCR control application on the Android based Nexus One (iii) detection/classification of P25 protocol based radios. In the second part of this thesis, a prototype testbed of a Dynamic Spectrum Access (DSA) broker that uses the Beagle Board PSCR based sensor/classifier is developed. DSA in simple terms is a concept that lets the user without license (secondary user) to a particular frequency access that frequency, when the licensed user (primary user) is not using it. In the proposed testbed we have two Beagle Board based sensor/classifiers that cooperatively scan the spectrum and report the results to the central DSA broker. The DSA broker then identifies the frequency spectrum without primary users and informs the secondary users about the free spectrum. The secondary users can then communicate among each other using the frequency band allocated by the DSA broker. When the primary user enters the spectrum occupied by the secondary user, the DSA broker instructs the secondary user to use a different spectrum. Based on the experiments conducted on the testbed setup in the CWT lab environment, the average time taken by the DSA broker to detect the presence of primary user is 0.636 secs and the average time taken for the secondary user to leave the frequency band that interferes with the primary user is 0.653 secs.
- Applications and Security of Next-Generation, User-Centric Wireless SystemsRamstetter, Jerry Rick; Yang, Yaling; Yao, Danfeng (Daphne) (MDPI, 2010-07-28)Pervasive wireless systems have significantly improved end-users quality of life. As manufacturing costs decrease, communications bandwidth increases, and contextual information is made more readily available, the role of next generation wireless systems in facilitating users daily activities will grow. Unique security and privacy issues exist in these wireless, context-aware, often decentralized systems. For example, the pervasive nature of such systems allows adversaries to launch stealthy attacks against them. In this review paper, we survey several emergent personal wireless systems and their applications. These systems include mobile social networks, active implantable medical devices, and consumer products. We explore each systems usage of contextual information and provide insight into its security vulnerabilities. Where possible, we describe existing solutions for defendingagainst these vulnerabilities. Finally, we point out promising future research directions for improving these systems robustness and security
- Automatic Modulation Classication and Blind Equalization for Cognitive RadiosRamkumar, Barathram (Virginia Tech, 2011-07-28)Cognitive Radio (CR) is an emerging wireless communications technology that addresses the inefficiency of current radio spectrum usage. CR also supports the evolution of existing wireless applications and the development of new civilian and military applications. In military and public safety applications, there is no information available about the signal present in a frequency band and hence there is a need for a CR receiver to identify the modulation format employed in the signal. The automatic modulation classifier (AMC) is an important signal processing component that helps the CR in identifying the modulation format employed in the detected signal. AMC algorithms developed so far can classify only signals from a single user present in a frequency band. In a typical CR scenario, there is a possibility that more than one user is present in a frequency band and hence it is necessary to develop an AMC that can classify signals from multiple users simultaneously. One of the main objectives of this dissertation is to develop robust multiuser AMC's for CR. It will be shown later that multiple antennas are required at the receiver for classifying multiple signals. The use of multiple antennas at the transmitter and receiver is known as a Multi Input Multi Output (MIMO) communication system. By using multiple antennas at the receiver, apart from classifying signals from multiple users, the CR can harness the advantages offered by classical MIMO communication techniques like higher data rate, reliability, and an extended coverage area. While MIMO CR will provide numerous benefits, there are some significant challenges in applying conventional MIMO theory to CR. In this dissertation, open problems in applying classical MIMO techniques to a CR scenario are addressed. A blind equalizer is another important signal processing component that a CR must possess since there are no training or pilot signals available in many applications. In a typical wireless communication environment the transmitted signals are subjected to noise and multipath fading. Multipath fading not only affects the performance of symbol detection by causing inter symbol interference (ISI) but also affects the performance of the AMC. The equalizer is a signal processing component that removes ISI from the received signal, thus improving the symbol detection performance. In a conventional wireless communication system, training or pilot sequences are usually available for designing the equalizer. When a training sequence is available, equalizer parameters are adapted by minimizing the well known cost function called mean square error (MSE). When a training sequence is not available, blind equalization algorithms adapt the parameters of the blind equalizer by minimizing cost functions that exploit the higher order statistics of the received signal. These cost functions are non convex and hence the blind equalizer has the potential to converge to a local minimum. Convergence to a local minimum not only affects symbol detection performance but also affects the performance of the AMC. Robust blind equalizers can be designed if the performance of the AMC is also considered while adapting equalizer parameters. In this dissertation we also develop Single Input Single Output (SISO) and MIMO blind equalizers where the performance of the AMC is also considered while adapting the equalizer parameters.
- An Automatic Solution to Checking Compatibility between Routing Metrics and ProtocolsLiu, Chang (Virginia Tech, 2016-01-19)Routing metrics are important mechanisms to adjust routing protocols' path selection according to the needs of a network system. However, if a routing metric design does not correctly match a particular routing protocol, the protocol may not be able to find an optimal path; routing loops can be produced as well. Thus, the compatibility between routing metrics and routing protocols is increasingly significant with the widespread deployment of wired and wireless networks. However, it is usually difficult to tell whether a routing metric can be perfectly applied to a particular routing protocol. Manually enumerating all possible test cases is very challenging and often infeasible. Therefore, it is highly desirable to have an automatic solution so that one can avoid putting an incompatible combination of routing metric and protocol into use. In this thesis, the above issue has been addressed by developing two automated checking systems for examining the compatibility between real world routing metric and protocol implementations. The automatic routing protocol checking system assumes that some properties of routing metrics are given and the system's job is to check if a new routing protocol is able to achieve optimal, consistent and loop- free routing when it is combined with metrics that hold the given metric properties. In contrast to the protocol checking system, the automatic routing metric checking system assumes that a routing protocol is given and the checking system needs to verify if a new metric implementation will be able to work with this protocol. Experiments have been conducted to verify the correctness of both protocol and metric checking systems.
- Autonomous Link-Adaptive Schemes for Heterogeneous Networks with Congestion FeedbackAhmad, Syed Amaar (Virginia Tech, 2014-03-19)LTE heterogeneous wireless networks promise significant increase in data rates and improved coverage through (i) the deployment of relays and cell densification, (ii) carrier aggregation to enhance bandwidth usage and (iii) by enabling nodes to have dual connectivity. These emerging cellular networks are complex and large systems which are difficult to optimize with centralized control and where mobiles need to balance spectral efficiency, power consumption and fairness constraints. In this dissertation we focus on how decentralized and autonomous mobiles in multihop cellular systems can optimize their own local objectives by taking into account end-to-end or network-wide conditions. We propose several link-adaptive schemes where nodes can adjust their transmit power, aggregate carriers and select points of access to the network (relays and/or macrocell base stations) autonomously, based on both local and global conditions. Under our approach, this is achieved by disseminating the dynamic congestion level in the backhaul links of the points of access. As nodes adapt locally, the congestion levels in the backhaul links can change, which can in turn induce them to also change their adaptation objectives. We show that under our schemes, even with this dynamic congestion feedback, nodes can distributedly converge to a stable selection of transmit power levels and points of access. We also analytically derive the transmit power levels at the equilibrium points for certain cases. Moreover, through numerical results we show that the corresponding system throughput is significantly higher than when nodes adapt greedily following traditional link layer optimization objectives. Given the growing data rate demand, increasing system complexity and the difficulty of implementing centralized cross-layer optimization frameworks, our work simplifies resource allocation in heterogeneous cellular systems. Our work can be extended to any multihop wireless system where the backhaul link capacity is limited and feedback on the dynamic congestion levels at the access points is available.
- Building a Dynamic Spectrum Access Smart Radio With Application to Public Safety Disaster CommunicationsSilvius, Mark D. (Virginia Tech, 2009-08-13)Recent disasters, including the 9/11 terrorist attacks, Hurricane Katrina, the London subway bombings, and the California wildfires, have all highlighted the limitations of current mobile communication systems for public safety first responders. First, in a point-to-point configuration, legacy radio systems used by first responders from differing agencies are often made by competing manufacturers and may use incompatible waveforms or channels. In addition, first responder radio systems, which may be licensed and programmed to operate in frequency bands allocated within their home jurisdiction, may be neither licensed nor available in forward-deployed disaster response locations, resulting in an operational scarcity of usable frequencies. To address these problems, first responders need smart radio solutions which can bridge these disparate legacy radio systems together, can incorporate new smart radio solutions, or can replace these existing aging radios. These smart radios need to quickly find each other and adhere to spectrum usage and access policies. Second, in an infrastructure configuration, legacy radio systems may not operate at all if the existing communications backbone has been destroyed by the disaster event. A communication system which can provide a new, temporary infrastructure or can extend an existing infrastructure into a shaded region is needed. Smart radio nodes that make up the public safety infrastructure again must be able to find each other, adhere to spectrum usage policies, and provide access to other smart radios and legacy public safety radios within their coverage area. This work addresses these communications problems in the following ways. First, it applies cognitive radio technology to develop a smart radio system capable of rapidly adapting itself so it can communicate with existing legacy radio systems or other smart radios using a variety of standard and customized waveforms. These smart radios can also assemble themselves into an ad-hoc network capable of providing a temporary communications backbone within the disaster area, or a network extension to a shaded communications area. Second, this work analyzes and characterizes a series of rendezvous protocols which enable the smart radios to rapidly find each other within a particular coverage area. Third, this work develops a spectrum sharing protocol that enables the smart radios to adhere to spectral policies by sharing spectrum with other primary users of the band. Fourth, the performance of the smart radio architecture, as well as the performance of the rendezvous and spectrum sharing protocols, is evaluated on a smart radio network testbed, which has been assembled in a laboratory setting. Results are compared, when applicable, to existing radio systems and protocols. Finally, this work concludes by briefly discussing how the smart radio technologies developed in this dissertation could be combined to form a public safety communications architecture, applicable to the FCC's stated intent for the 700 MHz Band. In the future, this work will be extended to applications outside of the public safety community, specifically, to communications problems faced by warfighters in the military.
- Building the Foundations and Experiences of 6G and Beyond Networks: A Confluence of THz Systems, Extended Reality (XR), and AI-Native Semantic CommunicationsChaccour, Christina (Virginia Tech, 2023-05-02)The emergence of 6G and beyond networks is set to enable a range of novel services such as personalized highly immersive experiences, holographic teleportation, and human-like intelligent robotic applications. Such applications require a set of stringent sensing, communication, control, and intelligence requirements that mandate a leap in the design, analysis, and optimization of today's wireless networks. First, from a wireless communication standpoint, future 6G applications necessitate extreme requirements in terms of bidirectional data rates, near-zero latency, synchronization, and jitter. Concurrently, such services also need a sensing functionality to track, localize, and sense their environment. Owing to its abundant bandwidth, one may naturally resort to terahertz (THz) frequency bands (0.1 − 10 THz) so as to provide significant wireless capacity gains and enable high-resolution environment sensing. Nonetheless, operating a wireless system at the THz band is constrained by a very uncertain channel which brings forth novel challenges. In essence, these channel limitations lead to unreliable intermittent links ergo the short communication range and the high susceptibility to blockage and molecular absorption. Second, given that emerging wireless services are "intelligence-centric", today's communication links must be transformed from a mere bit-pipe into a brain-like reasoning system. Towards this end, one can exploit the concept of semantic communications, a revolutionary paradigm that promises to transform radio nodes into intelligent agents that can extract the underlying meaning (semantics) or significance in a data stream. However, to date, there has been a lack in holistic, fundamental, and scalable frameworks for building next-generation semantic communication networks based on rigorous and well-defined technical foundations. Henceforth, to panoramically develop the fully-fledged theoretical foundations of future 6G applications and guarantee affluent corresponding experiences, this dissertation thoroughly investigates two thrusts. The first thrust focuses on developing the analytical foundations of THz systems with a focus on network design, performance analysis, and system optimization. First, a novel and holistic vision that articulates the unique role of THz in 6G systems is proposed. This vision exposes the solutions and milestones necessary to unleash THz's true potential in next-generation wireless systems. Then, given that extended reality (XR) will be a staple application of 6G systems, a novel risk and tail-based performance analysis is proposed to evaluate the instantaneous performance of THz bands for specific ultimate virtual reality (VR) services. Here, the results showcase that abundant bandwidth and the molecular absorption effect have only a secondary effect on the reliability compared to the availability of line-of-sight. More importantly, the results highlight that average metrics overlook extreme events and tend to provide false positive performance guarantees. To address the identified challenges of THz systems, a risk-oriented learning-based design that exploits reconfigurable intelligent surfaces (RISs) is proposed so as to optimize the instantaneous reliability. 
Furthermore, the analytical results are extended to investigate the uplink freshness of augmented reality (AR) services. Here, a novel ruin-based performance is conducted that scrutinizes the peak age of information (PAoI) during extreme events. Next, a novel joint sensing, communication, and artificial intelligence (AI) framework is developed to turn every THz communication link failure into a sensing opportunity, with application to digital world experiences with XR. This framework enables the use of the same waveform, spectrum, and hardware for both sensing and communication functionalities. Furthermore, this sensing input is intelligently processed via a novel joint imputation and forecasting system that is designed via non-autoregressive and transformed-based generative AI tools. This joint system enables fine-graining the sensing input to smaller time slots, predicting missing values, and fore- casting sensing and environmental information about future XR user behavior. Then, a novel joint quality of personal experience (QoPE)-centric and sensing-driven optimization is formulated and solved via deep hysteretic multi-agent reinforcement learning tools. Essentially, this dissertation establishes a solid foundation for the future deployment of THz frequencies in next-generation wireless networks through the proposal of a comprehensive set of principles that draw on the theories of tail and risk, joint sensing and communication designs, and novel AI frameworks. By adopting a multi-faceted approach, this work contributes significantly to the understanding and practical implementation of THz technology, paving the way for its integration into a wide range of applications that demand high reliability, resilience, and an immersive user experience. In the second thrust of this dissertation, the very first theoretical foundations of semantic communication and AI-native wireless networks are developed. In particular, a rigorous and holistic vision of an end-to-end semantic communication network that is founded on novel concepts from AI, causal reasoning, transfer learning, and minimum description length theory is proposed. Within this framework, the dissertation demonstrates that moving from data-driven intelligence towards reasoning-driven intelligence requires identifying association (statistical) and causal logic. Additionally, to evaluate the performance of semantic communication networks, novel key performance indicators metrics that include new "reasoning capacity" measures that could go beyond Shannon's bound to capture the imminent convergence of computing and communication resources. Then, a novel contrastive learning framework is proposed so as to disentangle learnable and memoizable patterns in source data and make the data "semantic-ready". Through the development of a rigorous end-to-end semantic communication network founded on novel concepts from communication theory and AI, along with the proposal of novel performance metrics, this dissertation lays a solid foundation for the advancement of reasoning-driven intelligence in the field of wireless communication and paves the way for a wide range of future applications. Ultimately, the various analytical foundations presented in this dissertation will provide key guidelines that guarantee seamless experiences in future 6G applications, enable a successful deployment of THz wireless systems as a versatile band for integrated communication and sensing, and build future AI-native semantic communication networks.
- A Business Framework for Dynamic Spectrum Access in Cognitive NetworksKelkar, Nikhil Satish (Virginia Tech, 2008-04-21)Traditionally, networking technology has been limited because of the networks inability to adapt resulting in sub-optimal performance. Limited in state, scope and response mechanisms, network elements consisting of nodes, protocol layers and policies have been unable to make intelligent decisions. Modern networks often operate in environments where network resources (e.g. node energy, link quality, bandwidth, etc.), application data (e.g. location of user) and user behaviors (e.g. user mobility and user request pattern) experience changes over time. These changes degrade the network performance and cause service interruption. In recent years, the words "cognitive" and "smart" have become the buzzwords and have been applied to many different networking and communication systems. Cognitive networks are being touted as the next generation network services which will perceive the current network conditions and dynamically adjust their parameters to achieve better productivity. Cognitive radios will provide the end-user intelligence needed for cognitive networks and provide dynamic spectrum access for better spectrum efficiency. We are interested in assessing the practical impact of Cognitive Networks on the Wireless Communication industry. Our goal is to propose a formal business model that will help assess the implications of this new technology in the real world and the practical feasibility of its implementation. We use the layered business model proposed by Ballon [8] which follows a multi-parameter approach by defining four levels on which business models operate and by identifying three critical design parameters on each layer. The Value Network layer identifies the important entities which come into the picture in the light of the new technology. The Functional layer addresses the issue of different architectural implementations of the Cognitive Networks. At the Financial layer, we propose a NPV model which highlights the cost/revenue implications of the technology in the real world and contrasts the different Dynamic Spectrum Access (DSA) schemes from a financial perspective. Finally, the Value Proposition layer seeks to explain the end-user flexibility and efficient spectrum management provided by the use of Cognitive radios and Cognitive networks.
- Characterizing and Detecting Online Deception via Data-Driven MethodsHu, Hang (Virginia Tech, 2020-05-27)In recent years, online deception has become a major threat to information security. Online deception that caused significant consequences is usually spear phishing. Spear-phishing emails come in a very small volume, target a small number of audiences, sometimes impersonate a trusted entity and use very specific content to redirect targets to a phishing website, where the attacker tricks targets sharing their credentials. In this thesis, we aim at measuring the entire process. Starting from phishing emails, we examine anti-spoofing protocols, analyze email services' policies and warnings towards spoofing emails, and measure the email tracking ecosystem. With phishing websites, we implement a powerful tool to detect domain name impersonation and detect phishing pages using dynamic and static analysis. We also analyze credential sharing on phishing websites, and measure what happens after victims share their credentials. Finally, we discuss potential phishing and privacy concerns on new platforms such as Alexa and Google Assistant. In the first part of this thesis (Chapter 3), we focus on measuring how email providers detect and handle forged emails. We also try to understand how forged emails can reach user inboxes by deliberately composing emails. Finally, we check how email providers warn users about forged emails. In the second part (Chapter 4), we measure the adoption of anti-spoofing protocols and seek to understand the reasons behind the low adoption rates. In the third part of this thesis (Chapter 5), we observe that a lot of phishing emails use email tracking techniques to track targets. We collect a large dataset of email messages using disposable email services and measure the landscape of email tracking. In the fourth part of this thesis (Chapter 6), we move on to phishing websites. We implement a powerful tool to detect squatting domains and train a machine learning model to classify phishing websites. In the fifth part (Chapter 7), we focus on the credential leaks. More specifically, we measure what happens after the targets' credentials are leaked. We monitor and measure the potential post-phishing exploiting activities. Finally, with new voice platforms such as Alexa becoming more and more popular, we wonder if new phishing and privacy concerns emerge with new platforms. In this part (Chapter 8), we systematically assess the attack surfaces by measuring sensitive applications on voice assistant systems. My thesis measures important parts of the complete process of online deception. With deeper understandings of phishing attacks, more complete and effective defense mechanisms can be developed to mitigate attacks in various dimensions.
- Circuit Design Methods with Emerging NanotechnologiesZheng, Yexin (Virginia Tech, 2009-12-08)As complementary metal-oxide semiconductor (CMOS) technology faces more and more severe physical barriers down the path of continuously feature size scaling, innovative nano-scale devices and other post-CMOS technologies have been developed to enhance future circuit design and computation. These nanotechnologies have shown promising potentials to achieve magnitude improvement in performance and integration density. The substitution of CMOS transistors with nano-devices is expected to not only continue along the exponential projection of Moore's Law, but also raise significant challenges and opportunities, especially in the field of electronic design automation. The major obstacles that the designers are experiencing with emerging nanotechnology design include: i) the existing computer-aided design (CAD) approaches in the context of conventional CMOS Boolean design cannot be directly employed in the nanoelectronic design process, because the intrinsic electrical characteristics of many nano-devices are not best suited for Boolean implementations but demonstrate strong capability for implementing non-conventional logic such as threshold logic and reversible logic; ii) due to the density and size factors of nano-devices, the defect rate of nanoelectronic system is much higher than conventional CMOS systems, therefore existing design paradigms cannot guarantee design quality and lead to even worse result in high failure ratio. Motivated by the compelling potentials and design challenges of emerging post-CMOS technologies, this dissertation work focuses on fundamental design methodologies to effectively and efficiently achieve high quality nanoscale design. A novel programmable logic element (PLE) is first proposed to explore the versatile functionalities of threshold gates (TGs) and multi-threshold threshold gates (MTTGs). This PLE structure can realize all three- or four-variable logic functions through configuring binary control bits. This is the first single threshold logic structure that provides complete Boolean logic implementation. Based on the PLEs, a reconfigurable architecture is constructed to offer dynamic reconfigurability with little or no reconfiguration overhead, due to the intrinsic self-latching property of nanopipelining. Our reconfiguration data generation algorithm can further reduce the reconfiguration cost. To fully take advantage of such threshold logic design using emerging nanotechnologies, we also developed a combinational equivalence checking (CEC) framework for threshold logic design. Based on the features of threshold logic gates and circuits, different techniques of formulating a given threshold logic in conjunctive normal form (CNF) are introduced to facilitate efficient SAT-based verification. Evaluated with mainstream benchmarks, our hybrid algorithm, which takes into account both input symmetry and input weight order of threshold gates, can efficiently generate CNF formulas in terms of both SAT solving time and CNF generating time. Then the reversible logic synthesis problem is considered as we focus on efficient synthesis heuristics which can provide high quality synthesis results within a reasonable computation time. We have developed a weighted directed graph model for function representation and complexity measurement. An atomic transformation is constructed to associate the function complexity variation with reversible gates. 
The efficiency of our heuristic lies in maximally decreasing the function complexity during synthesis steps as well as the capability to climb out of local optimums. Thereafter, swarm intelligence, one of the machine learning techniques is employed in the space searching for reversible logic synthesis, which achieves further performance improvement. To tackle the high defect-rate during the emerging nanotechnology manufacturing process, we have developed a novel defect-aware logic mapping framework for nanowire-based PLA architecture via Boolean satisfiability (SAT). The PLA defects of various types are formulated as covering and closure constraints. The defect-aware logic mapping is then solved efficiently by using available SAT solvers. This approach can generate valid logic mapping with a defect rate as high as 20%. The proposed method is universally suitable for various nanoscale PLAs, including AND/OR, NOR/NOR structures, etc. In summary, this work provides some initial attempts to address two major problems confronting future nanoelectronic system designs: the development of electronic design automation tools and the reliability issues. However, there are still a lot of challenging open questions remain in this emerging and promising area. We hope our work can lay down stepstones on nano-scale circuit design optimization through exploiting the distinctive characteristics of emerging nanotechnologies.
- Coexistence of Wireless Networks for Shared Spectrum AccessGao, Bo (Virginia Tech, 2014-09-18)The radio frequency spectrum is not being efficiently utilized partly due to the current policy of allocating the frequency bands to specific services and users. In opportunistic spectrum access (OSA), the ``white spaces'' that are not occupied by primary users (a.k.a. incumbent users) can be opportunistically utilized by secondary users. To achieve this, we need to solve two problems: (i) primary-secondary incumbent protection, i.e., prevention of harmful interference from secondary users to primary users; (ii) secondary-secondary network coexistence, i.e., mitigation of mutual interference among secondary users. The first problem has been addressed by spectrum sensing techniques in cognitive radio (CR) networks and geolocation database services in database-driven spectrum sharing. The second problem is the main focus of this dissertation. To obtain a clear picture of coexistence issues, we propose a taxonomy of heterogeneous coexistence mechanisms for shared spectrum access. Based on the taxonomy, we choose to focus on four typical coexistence scenarios in this dissertation. Firstly, we study sensing-based OSA, when secondary users are capable of employing the channel aggregation technique. However, channel aggregation is not always beneficial due to dynamic spectrum availability and limited radio capability. We propose a channel usage model to analyze the impact of both primary and secondary user behaviors on the efficiency of channel aggregation. Our simulation results show that user demands in both the frequency and time domains should be carefully chosen to minimize expected cumulative delay. Secondly, we study the coexistence of homogeneous CR networks, termed as self-coexistence, when co-channel networks do not rely on inter-network coordination. We propose an uplink soft frequency reuse technique to enable globally power-efficient and locally fair spectrum sharing. We frame the self-coexistence problem as a non-cooperative game, and design a local heuristic algorithm that achieves the Nash equilibrium in a distributed manner. Our simulation results show that the proposed technique is mostly near-optimal and improves self-coexistence in spectrum utilization, power consumption, and intra-cell fairness. Thirdly, we study the coexistence of heterogeneous CR networks, when co-channel networks use different air interface standards. We propose a credit-token-based spectrum etiquette framework that enables spectrum sharing via inter-network coordination. Specifically, we propose a game-auction coexistence framework, and prove that the framework is stable. Our simulation results show that the proposed framework always converges to a near-optimal distributed solution and improves coexistence fairness and spectrum utilization. Fourthly, we study database-driven OSA, when secondary users are mobile. The use of geolocation databases is inadequate in supporting location-aided spectrum sharing if the users are mobile. We propose a probabilistic coexistence framework that supports mobile users by locally adapting their location uncertainty levels in order to find an appropriate trade-off between interference mitigation effectiveness and location update cost. Our simulation results show that the proposed framework can determine and adapt the database query intervals of mobile users to achieve near-optimal interference mitigation with minimal location updates.
- Coexistence of Wireless Systems for Spectrum SharingKim, Seungmo (Virginia Tech, 2017-07-28)Sharing a band of frequencies in the radio spectrum among multiple wireless systems has emerged as a viable solution for alleviating the severe capacity crunch in next-generation wireless mobile networks such as 5th generation mobile networks (5G). Spectrum sharing can be achieved by enabling multiple wireless systems to coexist in a single spectrum band. In this dissertation, we discuss the following coexistence problems in spectrum bands that have recently been raising notable research interest: 5G and Fixed Satellite Service (FSS) at 27.5-28.35 GHz (28 GHz); 5G and Fixed Service (FS) at 71-76 GHz (70 GHz); vehicular communications and Wi-Fi at 5.85-5.925 GHz (5.9 GHz); and mobile broadband communications and radar at 3.55-3.7 GHz (3.5 GHz). The results presented in each of the aforementioned parts show comprehensively that the coexistence methods help achieve spectrum sharing in each of the bands, and therefore contribute to achieve appreciable increase of bandwidth efficiency. The proposed techniques can contribute to making spectrum sharing a viable solution for the ever evolving capacity demands in the wireless communications landscape.
- Cognizant Networks: A Model and Framework for Session-based Communications and Adaptive NetworkingKalim, Umar (Virginia Tech, 2017-08-09)The Internet has made tremendous progress since its inception. The kingpin has been the transmission control protocol (TCP), which supports a large fraction of communication. With the Internet's wide-spread access, users now have increased expectations. The demands have evolved to an extent which TCP was never designed to support. Since network stacks do not provide the necessary functionality for modern applications, developers are forced to implement them over and over again --- as part of the application or supporting libraries. Consequently, application developers not only bear the burden of developing application features but are also responsible for building networking libraries to support sophisticated scenarios. This leads to considerable duplication of effort. The challenge for TCP in supporting modern use cases is mostly due to limiting assumptions, simplistic communication abstractions, and (once expedient) implementation shortcuts. To further add to the complexity, the limited TCP options space is insufficient to support extensibility and thus, contemporary communication patterns. Some argue that radical changes are required to extend the networks functionality; some researchers believe that a clean slate approach is the only path forward. Others suggest that evolution of the network stack is necessary to ensure wider adoption --- by avoiding a flag day. In either case, we see that the proposed solutions have not been adopted by the community at large. This is perhaps because the cost of transition from the incumbent to the new technology outweighs the value offered. In some cases, the limited scope of the proposed solutions limit their value. In other cases, the lack of backward compatibility or significant porting effort precludes incremental adoption altogether. In this dissertation, we focus on the development of a communication model that explicitly acknowledges the context of the conversation and describes (much of) modern communications. We highlight how the communication stack should be able to discover, interact with and use available resources to compose richer communication constructs. The model is able to do so by using session, flow and endpoint abstractions to describe communications between two or more endpoints. These abstractions provide means to the application developers for setting up and manipulating constructs, while the ability to recognize change in the operating context and reconfigure the constructs allows applications to adapt to the changing requirements. The model considers two or more participants to be involved in the conversation and thus enables most modern communication patterns, which is in contrast with the well-established two-participant model. Our contributions also include an implementation of a framework that realizes such communication methods and enables future innovation. We substantiate our claims by demonstrating case studies where we use the proposed abstractions to highlight the gains. We also show how the proposed model may be implemented in a backwards compatible manner, such that it does not break legacy applications, network stacks, or middleboxes in the network infrastructure. We also present use cases to substantiate our claims about backwards compatibility. This establishes that incremental evolution is possible. 
We highlight the benefits of context awareness in setting up complex communication constructs by presenting use cases and their evaluation. Finally, we show how the communication model may open the door for new and richer communication patterns.
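To make the session, flow, and endpoint abstractions concrete, here is a minimal sketch of how a session might compose flows among several participants and reconfigure them when the operating context changes; all class and method names (Session, Flow, Endpoint, on_context_change, and so on) are our own illustration, not the framework's actual API.

```python
# A minimal sketch of session/flow/endpoint abstractions; names and
# behavior are illustrative assumptions, not the framework's API.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Endpoint:
    host: str
    port: int

@dataclass
class Flow:
    src: Endpoint
    dst: Endpoint
    transport: str = "tcp"   # reconfigurable at runtime, e.g. to "quic"

@dataclass
class Session:
    """A conversation among two *or more* endpoints, composed of flows."""
    participants: List[Endpoint] = field(default_factory=list)
    flows: List[Flow] = field(default_factory=list)

    def open_flow(self, src: Endpoint, dst: Endpoint, transport="tcp") -> Flow:
        flow = Flow(src, dst, transport)
        self.flows.append(flow)
        return flow

    def on_context_change(self, event: str) -> None:
        # Adapt the construct to the new operating context, e.g. migrate
        # flows to a different transport when the network path degrades.
        if event == "path_degraded":
            for f in self.flows:
                f.transport = "quic"

# Usage: a three-participant session, unlike a fixed two-endpoint socket.
a = Endpoint("10.0.0.1", 5000)
b = Endpoint("10.0.0.2", 5000)
c = Endpoint("10.0.0.3", 5000)
s = Session(participants=[a, b, c])
s.open_flow(a, b)
s.open_flow(a, c)
s.on_context_change("path_degraded")
```

The point of the sketch is the shape of the abstraction: the session, not the application, owns the flows, so adaptation to a changing context happens in one place below the application.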
- Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data NetworksLin, Hua (Virginia Tech, 2012-09-27)The vision of the smart grid is predicated upon pervasive use of modern digital communication techniques in today's power system. As wide area measurement and control techniques are being developed and deployed for a more resilient power system, the role of communication networks is becoming prominent. Advanced communication infrastructure provides much wider system observability and enables globally optimal control schemes. Wide area measurement and monitoring with Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs) is a growing trend in this context. However, the large amount of data collected by PMUs or IEDs needs to be transferred over the data network to control centers, where real-time state estimation, protection, and control decisions are made. The volume and frequency of such data transfers, together with real-time delivery requirements, mandate that sufficient bandwidth and proper delay characteristics be ensured for correct operation. Power system dynamics are influenced by the underlying communication infrastructure. Therefore, extensive integration of the power system and communication infrastructure mandates that the two systems be studied as a single distributed cyber-physical system. This dissertation proposes a global event-driven co-simulation framework, termed GECO, for interconnected power systems and communication networks. GECO can be used as a design pattern for hybrid system simulation with continuous/discrete sub-components. An implementation of GECO is achieved by integrating two software packages, PSLF and NS2, into the framework. In addition, this dissertation proposes and studies a set of power system applications that can only be properly evaluated on a co-simulation framework like GECO, namely communication-based distance relay protection, all-PMU state estimation, and PMU-based out-of-step protection. All of them take advantage of the interplay between the power grid and the communication infrastructure. The GECO experiments described in this dissertation not only show the efficacy of the GECO framework but also provide guidance on using GECO in smart grid planning activities.
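The heart of such a framework is a single global event list that interleaves the continuous power-system solver's integration steps with the network simulator's discrete events on one shared clock. The sketch below shows that synchronization pattern under simplifying assumptions (stub handlers in place of PSLF and NS2, a fixed 4 ms network delay); it is a toy illustration of the pattern, not the GECO implementation.

```python
# A minimal sketch of global event-driven co-simulation: one shared
# event queue orders power-system integration steps and network events.
# The two "simulators" here are stubs, not PSLF or NS2.
import heapq

class CoSimulator:
    def __init__(self, power_step=0.01, end_time=1.0):
        self.queue = []            # entries: (time, seq, kind, payload)
        self.seq = 0               # tie-breaker for same-time events
        self.end_time = end_time
        # Seed the queue with periodic power-system integration steps.
        t = 0.0
        while t <= end_time:
            self.schedule(t, "power_step", None)
            t += power_step

    def schedule(self, time, kind, payload):
        heapq.heappush(self.queue, (time, self.seq, kind, payload))
        self.seq += 1

    def run(self):
        while self.queue:
            time, _, kind, payload = heapq.heappop(self.queue)
            if time > self.end_time:
                break
            if kind == "power_step":
                # Continuous side: integrate one step; a PMU measurement
                # becomes a packet handed to the network side.
                self.schedule(time, "pkt_send", {"pmu": 1, "t": time})
            elif kind == "pkt_send":
                # Discrete side: model a fixed 4 ms delay, deliver later.
                self.schedule(time + 0.004, "pkt_recv", payload)
            elif kind == "pkt_recv":
                # Delivered measurement would feed state estimation or
                # protection logic back into the grid model here.
                pass

CoSimulator(power_step=0.1, end_time=0.5).run()
```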
- A Comprehensive Analysis of Deep Learning for Interference Suppression, Sample and Model Complexity in Wireless SystemsOyedare, Taiwo Remilekun (Virginia Tech, 2024-03-12)The wireless spectrum is limited and the demand for its use is increasing due to technological advancements in wireless communication, resulting in persistent interference issues. Despite progress in addressing interference, it remains a challenge for effective spectrum usage, particularly in the use of license-free and managed shared bands and other opportunistic spectrum access solutions. Therefore, efficient and interference-resistant spectrum usage schemes are critical. In the past, most interference solutions have relied on avoidance techniques and expert-system-based mitigation approaches. Recently, researchers have applied artificial intelligence/machine learning techniques at the physical (PHY) layer, particularly deep learning, which suppresses or compensates for the interfering signal rather than simply avoiding it. In addition, deep learning has been used in recent years to address various difficult problems in wireless communications, such as transmitter classification, interference classification, and modulation recognition. To this end, this dissertation presents a thorough analysis of deep learning techniques for interference classification and suppression, and it examines the sample and model complexity issues that arise from using deep learning. First, we address the knowledge gap in the literature with respect to the state of the art in deep learning-based interference suppression. To account for the limitations of deep learning-based interference suppression techniques, we discuss several challenges, including lack of interpretability, the stochastic nature of the wireless channel, issues with open set recognition (OSR), and challenges with implementation. We also provide a technical discussion of the prominent deep learning algorithms proposed in the literature and offer guidelines for their successful implementation. Next, we investigate convolutional neural network (CNN) architectures for interference and transmitter classification tasks. In particular, we utilize a CNN architecture to classify interference, investigate the model complexity of CNN architectures for classifying homogeneous and heterogeneous devices, and examine their impact on test accuracy. Next, we explore issues with sample size and sample quality in the training data for deep learning. In doing so, we also propose a rule of thumb for CNN-based transmitter classification based on the findings of our sample complexity study. Finally, in cases where interference cannot be avoided, it is important to suppress it. To achieve this, we build upon autoencoder work from other fields to design a CNN-based autoencoder model that suppresses interference, thereby enabling the coexistence of different wireless technologies in both licensed and unlicensed bands.
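As a rough sketch of the suppression idea, the example below defines a small 1-D convolutional autoencoder over two-channel (I/Q) sample windows and runs one training step that maps interfered inputs toward clean targets; the layer sizes, the window length of 256, and the random placeholder tensors are assumptions for illustration, not the dissertation's architecture or data.

```python
# A minimal sketch of a 1-D convolutional autoencoder over I/Q windows;
# sizes and training setup are illustrative assumptions.
import torch
import torch.nn as nn

class InterferenceSuppressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses 2-channel (I/Q) windows by 4x in length.
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        # Decoder reconstructs the clean signal of interest.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=7, stride=2,
                               padding=3, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 2, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step: map interfered I/Q windows to clean counterparts.
model = InterferenceSuppressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
interfered = torch.randn(8, 2, 256)   # placeholder batch
clean = torch.randn(8, 2, 256)        # placeholder targets
loss = nn.functional.mse_loss(model(interfered), clean)
loss.backward()
opt.step()
```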
- Cooperation in Wireless NetworksSharma, Sushant (Virginia Tech, 2010-12-16)Spatial diversity, in the form of employing multiple antennas (i.e., MIMO), has proved to be very effective in increasing network capacity and reliability. However, equipping a wireless node with multiple antennas may not be practical, as the footprint of multiple antennas may not fit on a wireless node (particularly on handheld wireless devices). In order to achieve spatial diversity without requiring multiple antennas on the same node, so-called cooperative communications (CC) has been introduced. Under CC, each node is equipped with only a single antenna, and spatial diversity is achieved by exploiting the antennas on other nodes in the network through cooperative relaying. The goal of this dissertation is to maximize throughput at the network level through CC at the physical layer. A number of problems are explored in this investigation. The main contributions of this dissertation can be summarized as follows. 1. Optimal Relay Assignment. We first consider a simple CC model where each source-destination pair may employ only a single relay. For this three-node model, the choice of a relay node (among a set of available relay nodes) for a given session is critical to the overall network performance. We study the relay node assignment problem in a cooperative ad hoc network environment, where multiple source-destination pairs compete for the same pool of relay nodes in the network. Our objective is to assign the available relay nodes to different source-destination pairs so as to maximize the minimum data rate among all pairs. We present an optimal polynomial-time algorithm, called ORA, that solves this problem. A novel idea in this algorithm is a "linear marking" mechanism, which maintains linear complexity at each iteration. We offer a formal proof of optimality for ORA and use numerical results to demonstrate its capability (a compact reference formulation of this max-min objective is sketched after this entry). 2. Incorporating Network Coding. It has been shown that network coding (NC) can reduce the time-slot overhead when multiple sessions share the same relay node in CC. Such an approach is called network-coded CC (or NC-CC). Most existing works have focused on the benefits of this approach; the potential adverse effects of NC-CC remain unexplored. We explore this important problem by introducing the concept of network coding noise (NC noise). We show that, due to NC noise, NC may not always be beneficial to CC. We substantiate this finding in two important scenarios: analog network coding (ANC) in amplify-and-forward (AF) CC, and digital network coding (DNC) in decode-and-forward (DF) CC. We analyze the origin of NC noise via a careful study of signal aggregation at a relay node and signal extraction at a destination node. We derive a closed-form expression for the NC noise at each destination node and show that its existence can diminish the advantage of NC in CC. Our results shed new light on how to use NC in CC effectively. 3. Session Grouping and Relay Node Selection. When there are multiple sessions in the network, it may be necessary to combine sessions into different groups and then have each group select the most beneficial relay node for NC-CC. We study this joint grouping and relay node selection problem for NC-CC. By studying matching problems in hypergraphs, we show that this problem is NP-hard. We then propose a distributed online algorithm to solve this problem.
The key idea in our algorithm is to have each neighboring relay node of a newly joined session determine and offer the best group for this session from among the groups it is currently serving, and then to have the source node of the newly joined session select the best group among all received offers. We show that our distributed algorithm has polynomial complexity. Using extensive numerical results, we show that our distributed algorithm is near-optimal and adapts well to online network dynamics. 4. Grouping and Matching for Multi-Relay Cooperation. Existing models of NC-CC consider only a single relay node for each session group. We investigate how NC-CC behaves when multiple relay nodes are employed. For a given session, we develop closed-form formulas for the mutual information and achievable rate under multi-relay NC-CC. In multi-relay NC-CC, the achievable rate of a session depends on the other sessions in its group as well as on the set of relay nodes used for NC-CC. Therefore, we study NC-CC via joint optimization of the grouping and matching of session and relay groups in an ad hoc network. Although we show that the joint problem is NP-hard, we develop an efficient polynomial-time algorithm for grouping and matching (called G²M). G²M first builds beneficial relay groups for individual sessions. This is followed by multiple iterations during which sessions are combined with other sessions to form larger and better session groups (while the corresponding relay groups are merged and updated accordingly). Using extensive numerical results, we show the efficiency and near-optimality of our G²M algorithm.
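For readers who want to experiment with the max-min relay assignment objective from contribution 1, the sketch below solves it by binary-searching a rate threshold and testing feasibility with bipartite matching. This is a compact reference formulation under the simplifying assumption that every pair must receive its own relay; it is not the dissertation's ORA algorithm with its linear-marking mechanism, and the example rates are hypothetical.

```python
# Reference solver for max-min relay assignment: binary-search the
# minimum rate and check feasibility via bipartite matching (Kuhn's
# augmenting-path algorithm). Not the ORA algorithm from the work above.

def _can_match(rates, pairs, relays, threshold):
    """Can every source-destination pair get its own relay with rate >= threshold?"""
    match = {}                          # relay -> pair it is assigned to

    def augment(pair, seen):
        for r in relays:
            if rates[pair].get(r, float("-inf")) >= threshold and r not in seen:
                seen.add(r)
                if r not in match or augment(match[r], seen):
                    match[r] = pair
                    return True
        return False

    return all(augment(p, set()) for p in pairs)

def max_min_assignment(rates):
    """rates[p][r]: achievable rate if pair p is assisted by relay r."""
    pairs = list(rates)
    relays = sorted({r for p in rates for r in rates[p]})
    candidates = sorted({v for p in rates for v in rates[p].values()})
    best = 0.0
    lo, hi = 0, len(candidates) - 1
    while lo <= hi:                     # binary search over distinct rates
        mid = (lo + hi) // 2
        if _can_match(rates, pairs, relays, candidates[mid]):
            best, lo = candidates[mid], mid + 1
        else:
            hi = mid - 1
    return best

# Hypothetical rates for 2 source-destination pairs and 3 relays.
rates = {"s1": {"r1": 4.0, "r2": 2.0, "r3": 1.0},
         "s2": {"r1": 3.0, "r2": 5.0, "r3": 2.5}}
print(max_min_assignment(rates))        # 4.0: s1 -> r1, s2 -> r2
```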