Wireless Sensing and Fusion using Deep Neural Networks

dc.contributor.author: Yu, Jianyuan
dc.contributor.committeechair: Buehrer, Richard M.
dc.contributor.committeemember: Reed, Jeffrey H.
dc.contributor.committeemember: Liu, Lingjia
dc.contributor.committeemember: Lou, Wenjing
dc.contributor.committeemember: Huang, Jia-Bin
dc.contributor.department: Electrical Engineering
dc.date.accessioned: 2022-09-21T08:00:42Z
dc.date.available: 2022-09-21T08:00:42Z
dc.date.issued: 2022-09-20
dc.description.abstract: Deep Neural Networks (DNNs) have been proposed to solve many difficult problems within the context of wireless sensing. Indoor localization and human activity recognition (HAR) are two major applications of wireless sensing. However, current fingerprint-based localization methods require massive amounts of labeled data and suffer severe performance degradation in non-line-of-sight (NLOS) environments. To address this challenge, we first apply DNNs to multi-modal wireless signals, including WiFi, an inertial measurement unit (IMU), and ultra-wideband (UWB). By formulating localization as a multi-modal sequence regression problem, a multi-stream recurrent fusion method is developed to combine the current hidden state of each modality within a recurrent neural network, while accounting for each modality's uncertainty, learned directly from its immediate past states. The proposed method was evaluated on a large-scale open dataset and compared with a wide range of baseline methods. It is shown that the proposed approach achieves an average error below 20 centimeters, nearly three times better than classic methods. Second, in the context of activity recognition, we propose a multi-band WiFi fusion framework that hierarchically combines the features of sub-6 GHz channel state information (CSI) and the beam signal-to-noise ratio (SNR) at 60 GHz at different granularity levels. Specifically, we introduce three fusion methods: simple input fusion, feature fusion, and a more customized feature permutation that accounts for the granularity correspondence between the CSI and beam SNR measurements for task-specific sensing. To mitigate the problem of limited labeled training data, we further propose an autoencoder-based unsupervised fusion network consisting of separate encoders and decoders for the CSI and beam SNR. The effectiveness of the framework is thoroughly validated using an in-house experimental platform that includes indoor localization, pose recognition, and occupancy sensing. Finally, in the context of array processing, we address the model order estimation (MOE) problem, a prerequisite for Direction of Arrival (DoA) estimation in the presence of correlated multipath, a well-known difficult problem. Due to the limits imposed by array geometry, it is not possible to estimate spatial parameters for an arbitrary number of sources; an estimate of the signal model is required. While classic methods fail at MOE in the presence of correlated multipath interference, we show that data-driven supervised learning models can meet this challenge. In particular, we propose the application of Residual Neural Networks (ResNets) with grouped symmetric kernel filters, which provide accuracy above 95%, and a weighted loss function that eliminates the underestimation error of the model order. The improved MOE is shown to improve subsequent array processing tasks, reducing the overhead needed for temporal smoothing, shrinking the search space for signal association, and improving DoA estimation.
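To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of uncertainty-weighted multi-stream recurrent fusion, offered for illustration only; the module name, dimensions, and the scalar-confidence head are assumptions, not the dissertation's implementation.

# Illustrative sketch (not the dissertation's code): uncertainty-weighted
# multi-stream recurrent fusion for multi-modal indoor localization.
# Names such as MultiStreamFusion and hidden_dim are hypothetical.
import torch
import torch.nn as nn

class MultiStreamFusion(nn.Module):
    def __init__(self, input_dims, hidden_dim=64, out_dim=2):
        super().__init__()
        # One recurrent stream per modality (e.g., WiFi, IMU, UWB).
        self.streams = nn.ModuleList(
            [nn.GRU(d, hidden_dim, batch_first=True) for d in input_dims]
        )
        # Scalar confidence per modality, predicted from the stream's
        # previous hidden state (a stand-in for "uncertainty learned
        # from immediate past states").
        self.confidence = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in input_dims]
        )
        self.regressor = nn.Linear(hidden_dim, out_dim)  # x, y position

    def forward(self, inputs):
        # inputs: list of tensors, each (batch, time, input_dims[m])
        hiddens, scores = [], []
        for x, gru, conf in zip(inputs, self.streams, self.confidence):
            out, _ = gru(x)                      # (batch, time, hidden)
            h_now, h_prev = out[:, -1], out[:, -2]
            hiddens.append(h_now)
            scores.append(conf(h_prev))          # confidence from past state
        weights = torch.softmax(torch.cat(scores, dim=1), dim=1)  # (batch, M)
        fused = sum(w.unsqueeze(-1) * h
                    for w, h in zip(weights.unbind(dim=1), hiddens))
        return self.regressor(fused)

# Example with assumed per-modality feature sizes: WiFi (30), IMU (6), UWB (4).
model = MultiStreamFusion(input_dims=[30, 6, 4])
seqs = [torch.randn(8, 10, d) for d in (30, 6, 4)]
pred_xy = model(seqs)   # (8, 2) position estimates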
dc.description.abstractgeneral: Radio frequency (RF) signals are used not only for wireless communication (their most well-known application) but also to sense the environment. One specific application, localization and navigation, can require accuracy of 0.5 meters or better, which is a significant challenge indoors. To address this problem, we apply deep learning (a technique that has gained significant attention in recent years) to fuse multiple types of RF and sensor signals, including those commonly found in smartphones (e.g., UWB, WiFi, and IMUs). The result is a technique that can achieve 20 cm accuracy in indoor localization applications. In addition to localization, commercial WiFi signals can also be used to sense human activity. The received signals from a WiFi transmitter contain sensing information about the environment, including geometric information (angle, distance, and velocity) about objects. We specifically show that our proposed approach can successfully recognize human pose, whether or not a specific seat is occupied, and a person's location. Moreover, we show that this can be done with relatively little labeled data using a technique known as transfer learning. Finally, we apply another neural network structure to solve a particular problem in multi-antenna processing: model order estimation in the presence of coherent multipath. The resulting system delivers 95% accuracy in complex environments, greatly improving overall array processing.
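As an illustration of the weighted loss mentioned for model order estimation, the sketch below shows one hedged way an asymmetric penalty on underestimation could be written in PyTorch; the function name, penalty factor, and classifier interface are hypothetical, not the dissertation's design.

# Illustrative sketch (not the dissertation's code): a weighted
# classification loss for model order estimation (MOE) that penalizes
# underestimating the order more heavily than overestimating it.
# The under_penalty factor is an assumed hyperparameter.
import torch
import torch.nn.functional as F

def weighted_moe_loss(logits, true_order, under_penalty=3.0):
    # logits: (batch, max_order) class scores from a ResNet-style classifier
    # true_order: (batch,) zero-based index of the true model order
    ce = F.cross_entropy(logits, true_order, reduction="none")
    predicted = logits.argmax(dim=1)
    # Up-weight samples the network currently underestimates.
    weight = torch.where(predicted < true_order,
                         torch.full_like(ce, under_penalty),
                         torch.ones_like(ce))
    return (weight * ce).mean()

# Example: scores for up to 5 sources on a batch of 16 array snapshots.
logits = torch.randn(16, 5)
true_order = torch.randint(0, 5, (16,))
loss = weighted_moe_loss(logits, true_order)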
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:35386
dc.identifier.uri: http://hdl.handle.net/10919/111944
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: wifi sensing
dc.subject: sensor fusion
dc.title: Wireless Sensing and Fusion using Deep Neural Networks
dc.type: Dissertation
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle: Yu_J_D_2022.pdf (14.46 MB, Adobe Portable Document Format)