Browsing by Author "Ernst, Joseph M."
- Automatic Generation of Efficient Parallel Streaming Structures for Hardware Implementation. Koehn, Thaddeus E. (Virginia Tech, 2016-11-30). Digital signal processing systems demand higher computational performance and more operations per second than ever before, and this trend is not expected to end any time soon. Processing architectures must adapt to meet these demands. The two techniques most prevalent for meeting throughput constraints are parallel processing and stream processing. By combining these techniques, significant throughput improvements have been achieved, but such preliminary results apply to specific applications, and general tools for automation are in their infancy. This dissertation develops techniques to automatically generate efficient parallel streaming hardware architectures.
- Enhanced Neural Network Training Using Selective Backpropagation and Forward Propagation. Bendelac, Shiri (Virginia Tech, 2018-06-22). Neural networks make headlines every day as the tool of the future, powering artificial intelligence programs and supporting technologies never seen before. However, training a neural network can take days or even weeks for larger networks, and achieving state-of-the-art results in academia and industry requires supercomputers and GPUs. This thesis discusses employing selective measures to determine when to backpropagate and forward propagate in order to reduce training time while maintaining classification performance. Both new algorithms are tested on the MNIST and CASIA datasets with successful results: the selective backpropagation algorithm reduces the number of backpropagations completed by up to 93.3%, and the selective forward propagation algorithm reduces forward propagations and backpropagations completed by up to 72.9%, compared to baseline runs that always forward propagate and backpropagate. This work also discusses employing the selective backpropagation algorithm on a modified dataset in which some classes are disproportionately under-represented compared to others.
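The selective backpropagation idea above (forward-propagate every sample, but run the costly backward pass only while the sample's loss is still high) can be sketched in a few lines. This is a toy illustration on synthetic data, not the thesis's implementation; the logistic model, the threshold value, and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny logistic-regression "network" on synthetic, linearly separable data.
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = (X @ true_w > 0).astype(float)

w = np.zeros(4)
lr = 0.5
loss_threshold = 0.3   # assumed cutoff: skip backprop for well-learned samples
backprops = 0

for epoch in range(20):
    for i in range(len(X)):
        p = sigmoid(X[i] @ w)                  # forward pass (always performed)
        loss = -(y[i] * np.log(p + 1e-12)
                 + (1 - y[i]) * np.log(1 - p + 1e-12))
        if loss > loss_threshold:              # selective backward pass
            w += lr * (y[i] - p) * X[i]        # per-sample gradient step
            backprops += 1

acc = np.mean((sigmoid(X @ w) > 0.5) == y.astype(bool))
```

As training progresses, most samples fall below the loss threshold, so `backprops` ends up far below the 4000 updates a baseline run would perform.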
- Framework for Evaluating the Severity of Cybervulnerability of a Traffic Cabinet. Ernst, Joseph M.; Michaels, Alan J. (National Academy of Sciences, 2017). The increasing connectivity of transportation infrastructure is driving a need for additional security in transportation systems. For security decisions in a budget-constrained environment, the possible effect of a cyberattack must be numerically characterized. The size of an effect depends on the attacker's level of access and the vehicular demand on the intersections being controlled. This paper proposes a framework for better understanding the levels of access and the impact achievable in scenarios with varying demand. Simulations on a simple corridor provide numerical examples of the possible effects. The paper concludes that some levels of cyberthreat may be acceptable in locations where traffic volumes could not create an unmanageable queue, whereas the more intimate levels of access can cause serious safety concerns by modifying traffic-controller settings in ways that encourage red-light running and accidents. The proposed framework can be used by transportation and cybersecurity professionals to prioritize actions for securing the infrastructure.
- The Importance of Data in RF Machine Learning. Clark IV, William Henry (Virginia Tech, 2022-11-17). While the toolset known as Machine Learning (ML) is not new, several of its tools have been revitalized by improved hardware and applied across many domains in the last two decades. Deep Neural Network (DNN) applications have contributed significantly to research on Radio Frequency (RF) problems over the last decade, spurred by results in image and audio processing. ML, and Deep Learning (DL) specifically, is driven by access to relevant data during the training phase, because the learned feature sets are derived from vast amounts of similar data. Despite this critical reliance on data, the literature provides insufficient answers on how to quantify the training-data needs of an application in order to achieve a desired performance. This dissertation first creates a practical definition that bounds the problem space of Radio Frequency Machine Learning (RFML), which we take to mean the application of ML as close to the sampled baseband signal, directly after digitization, as possible, while allowing for preprocessing when reasonably defined and justified. After constraining the problem to the RFML domain, the kinds of ML that have been applied, and the techniques that have shown benefits, are reviewed from the literature. With the problem space defined and the trends in the literature examined, the next goal is a better understanding of data quality through quantification. This quantification helps explain how the quality of data affects the final performance of ML systems, how it drives the quantity of observations required, and how its impact can be generalized and contrasted across problems. With this understanding of how data quality and quantity affect performance in the RFML space, data generation techniques and realizations are then examined, from conceptual designs through real-time hardware implementations. The results of this dissertation provide a foundation for estimating the investment required to realize a performance goal within a DL framework, as well as a rough order of magnitude for common goals within the RFML problem space.
- Investigation of a Correlation Based Technique for Rapid Phase Synchronization in the DVB-S Standard. Nguyen, Francis Thanh (Virginia Tech, 2016-01-27). The Digital Video Broadcasting - Satellite (DVB-S) standard is used to provide video and radio to millions of users worldwide and is designed to deliver quasi-error-free satellite communications. This thesis discusses some limitations of the DVB-S standard, describes related work that attempts to address these concerns, and proposes a modification that enhances the performance and reliability of the standard by using a correlator in a DVB-S receiver. In many existing receive chains, synchronization is delayed because phase ambiguity cannot be determined and corrected until after Viterbi decoding. By correlating against known symbols before demodulation, the phase ambiguity can be corrected prior to Viterbi decoding, reducing the time required to synchronize the received signal. To enhance the correlator's ability to detect the DVB-S synchronization bytes, a two-byte, rather than single-byte, known sync word is proposed as a modification to the standard. The motivation behind the longer sync word is improved performance in high-noise environments: a two-byte sync word provides more known information for correlation, and the resulting correlation peaks are double those obtained with a single byte, corresponding to about a 3 dB increase in effective SNR for fast signal acquisition and tracking in a noisy environment.
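The phase-ambiguity correction described above can be shown with a small numeric sketch: a known sync word is correlated against the received symbols under each of QPSK's four candidate 90-degree rotations, and the strongest real-valued correlation identifies the ambiguity. This is a simplified model (no Viterbi decoding; the sync word, noise level, and names are assumptions, not DVB-S parameters).

```python
import numpy as np

rng = np.random.default_rng(1)

# Known sync word: 16 bits -> 8 unit-energy QPSK symbols (assumed pattern).
sync_bits = rng.integers(0, 2, size=16)
sync = ((1 - 2 * sync_bits[0::2]) + 1j * (1 - 2 * sync_bits[1::2])) / np.sqrt(2)

# Simulate reception with an unknown 90-degree phase ambiguity plus noise.
true_rot = 3                                   # unknown multiple of 90 degrees
rx = sync * np.exp(1j * np.pi / 2 * true_rot)
rx += 0.1 * (rng.normal(size=rx.shape) + 1j * rng.normal(size=rx.shape))

# Correlate against the known word under each candidate rotation; the
# largest real correlation resolves the ambiguity before any decoding.
scores = [np.real(np.vdot(sync * np.exp(1j * np.pi / 2 * k), rx))
          for k in range(4)]
est_rot = int(np.argmax(scores))
```

Doubling the sync word length doubles the correlation peak relative to the noise-free case, which is the source of the roughly 3 dB gain claimed for the two-byte word.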
- Receiver-Assigned CDMA in Wireless Sensor Networks. Petrosky, Eric Edward (Virginia Tech, 2018-05-23). A new class of Wireless Sensor Networks (WSNs) is emerging within the Internet of Things (IoT), featuring extremely high node density, low data rates per node, and high network dependability. Applications such as industrial IoT, factory automation, vehicular networks, aviation, and spacecraft will soon feature hundreds of low-power, low-data-rate (1-15 kbps) wireless sensor nodes within a limited spatial environment. Existing Medium Access Control (MAC) layer protocols, namely IEEE 802.15.4, may not be suitable for such highly dense, low-rate networks. A new MAC protocol has been proposed that supports a Receiver-Assigned Code Division Multiple Access (RA-CDMA) physical-layer multiple access technique, which may enable higher network scalability while maintaining performance and adding robustness. This thesis compares the contention mechanisms of IEEE 802.15.4 non-beacon-enabled mode and RA-CDMA, using a Matlab simulation framework for end-to-end simulations of both protocols. Simulations suggest that IEEE 802.15.4 networks begin to break down in throughput, latency, and delivery ratio at a relatively low overall traffic rate compared to RA-CDMA networks. Results show that networks using the proposed RA-CDMA multiple access can support node densities two to three times higher than IEEE 802.15.4 within the same bandwidth. Furthermore, features of a new MAC layer protocol optimized for RA-CDMA are proposed, which could further improve network performance over IEEE 802.15.4. The protocol's simple, lightweight design eliminates significant overhead compared to other protocols while meeting performance requirements, and could further enable the deployment of RA-CDMA WSNs.
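The receiver-assigned spreading idea can be illustrated with a toy direct-sequence sketch: each receiver owns a spreading code, any transmitter addressing that receiver spreads with that code, and despreading with the receiver's own code suppresses concurrent traffic meant for other receivers. The code length, bipolar codes, and noise level are assumptions for illustration only, not RA-CDMA protocol parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

spread_len = 64
code_a = rng.choice([-1, 1], size=spread_len)   # receiver A's assigned code
code_b = rng.choice([-1, 1], size=spread_len)   # receiver B's assigned code

bits_to_a = np.array([1, -1, 1, 1])             # data addressed to receiver A
bits_to_b = np.array([-1, -1, 1, -1])           # concurrent data to receiver B

# Two simultaneous transmissions share the channel, plus additive noise.
tx = (np.outer(bits_to_a, code_a) + np.outer(bits_to_b, code_b)).ravel()
rx = tx + 0.5 * rng.normal(size=tx.size)

# Receiver A despreads with its own code; B's signal decorrelates and
# averages out over the 64-chip spreading interval.
chips = rx.reshape(-1, spread_len)
decoded_a = np.sign(chips @ code_a)
```

The despread signal term scales with the full code length (64 here), while the cross-traffic and noise terms grow only like its square root, which is what lets many such low-rate links coexist in the same bandwidth.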
- Robust Blind Spectral Estimation in the Presence of Impulsive Noise. Kees, Joel Thomas (Virginia Tech, 2019-03-07). Robust nonparametric spectral estimation aims to generate an accurate estimate of the Power Spectral Density (PSD) of a given set of data while minimizing the bias due to data outliers. It is applied in electrical communications and digital signal processing when a PSD estimate of the electromagnetic spectrum is desired (often for signal detection) and the spectrum is contaminated by Impulsive Noise (IN). Power Line Communication (PLC) is one communication environment where IN is a concern, because power lines were not designed to carry communication signals. Many noise models exist for statistically modeling different types of IN; one popular model, used for PLC and various other applications, is the Middleton Class A model, which is used extensively in this thesis. The performance of two nonparametric spectral estimation methods is analyzed in IN: the Welch method and the multitaper method. These estimators work well under the common assumption that receiver noise is Additive White Gaussian Noise (AWGN), but their performance degrades when they are used for signal detection in IN environments. In this thesis, basic robust estimation theory is used to modify the Welch and multitaper methods to increase their robustness, and it is shown that signal detection capability in IN improves when the modified robust estimators are used.
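One minimal sketch of the kind of robustification discussed above is to replace the mean across Welch segments with a median, so that the few segments struck by impulses cannot dominate the PSD estimate. The impulse model here is a crude Bernoulli-Gaussian stand-in (not the Middleton Class A model used in the thesis), and all parameters and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 1024, 8192

# A 100 Hz tone in Gaussian noise, plus rare large impulses.
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.normal(size=n)
impulses = rng.random(n) < 0.002               # assumed impulse probability
x[impulses] += 50 * rng.normal(size=impulses.sum())

def welch_psd(x, seg_len, combine):
    # Split into non-overlapping windowed segments and combine the
    # per-segment periodograms with the supplied statistic.
    segs = x[: len(x) // seg_len * seg_len].reshape(-1, seg_len)
    win = np.hanning(seg_len)
    pgrams = np.abs(np.fft.rfft(segs * win, axis=1)) ** 2
    return combine(pgrams, axis=0)

mean_psd = welch_psd(x, 256, np.mean)      # classic Welch: outlier-sensitive
median_psd = welch_psd(x, 256, np.median)  # robustified variant (assumption)
```

With a 256-sample segment at fs = 1024 Hz, the tone lands in bin 25; the median estimate keeps its noise floor near the clean Gaussian level, while the mean estimate's floor is inflated by the impulse energy.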
- Stacking Ensemble for auto_ml. Ngo, Khai Thoi (Virginia Tech, 2018-06-13). Machine learning has been a subject of intense study across many industries and academic research areas. Companies and researchers have taken full advantage of various machine learning approaches to solve their problems; however, developers need a broad understanding of the field to fully harness the potential of different machine learning models and achieve efficient results. This thesis therefore begins by comparing auto_ml with other hyper-parameter optimization techniques. auto_ml is a fully autonomous framework that lessens the knowledge prerequisite for accomplishing complicated machine learning tasks: it automatically selects the best features from a given dataset and chooses the best model to fit and predict the data. Across multiple tests, auto_ml outperforms MLP and other similar frameworks on various datasets while using a small amount of processing time. The thesis then proposes and implements a stacking ensemble technique to build protection against over-fitting on small datasets into the auto_ml framework. Stacking combines a collection of machine learning models' predictions to arrive at a final prediction. The stacked auto_ml ensemble results are more stable and consistent than those of the original framework across different training sizes of all analyzed small datasets.
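Stacking as described above can be sketched generically: base learners produce out-of-fold predictions, and a meta-learner is then trained on those predictions rather than on the raw features. This toy numpy version (hypothetical base learners, a 2-fold split, and a least-squares meta-learner) only illustrates the idea and is unrelated to auto_ml's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class problem.
X = rng.normal(size=(300, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def fit_centroid(Xtr, ytr):
    # Base learner 1: nearest class centroid (returns 0/1 predictions).
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return lambda Xt: (np.linalg.norm(Xt - c1, axis=1) <
                       np.linalg.norm(Xt - c0, axis=1)).astype(float)

def fit_linreg(Xtr, ytr):
    # Base learner 2: least-squares fit (returns continuous scores).
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return lambda Xt: Xt @ w

base_fits = [fit_centroid, fit_linreg]

# Out-of-fold base predictions (2-fold) so the meta-learner never sees
# predictions a base model made on its own training data.
half = len(X) // 2
meta_X = np.zeros((len(X), len(base_fits)))
for tr, te in [(slice(0, half), slice(half, None)),
               (slice(half, None), slice(0, half))]:
    for j, fit in enumerate(base_fits):
        meta_X[te, j] = fit(X[tr], y[tr])(X[te])

# Meta-learner: least-squares blend of the base predictions.
beta, *_ = np.linalg.lstsq(meta_X, y, rcond=None)
final = (meta_X @ beta > 0.5).astype(float)
acc = np.mean(final == y)
```

The out-of-fold step is the part that provides the over-fitting protection the thesis targets: the blend weights are chosen on predictions the base models could not have memorized.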