Vision-Enhanced Communications: On the Benefits of NLOS/LOS Knowledge in Wireless Systems
Abstract
The proliferation of Internet of Things (IoT) devices equipped with meteorological, auditory, optical, and infrared sensors has opened the door to integrating sensor-based information into physical layer communication system design. Many properties of the wireless channel, especially for mobile applications, are highly dynamic and easily observable using non-radio-frequency (RF) sensors or RF sensors operating out-of-band (OOB), which we refer to collectively as vision sensors. Using vision sensors to provide information about one such property, the line-of-sight (LOS)/non-line-of-sight (NLOS) state, is the central focus of this work. A generalized signal detection framework is presented for a vision sensor-aided receiver operating in a binary continuous-time Markov chain (CTMC) channel environment in which the NLOS/LOS state toggles intermittently. Several cases are explored in which varying degrees of NLOS/LOS knowledge are available at the receiver, with an emphasis on labeled vs. unlabeled information. Bayes risk and composite likelihood ratio test (LRT) methods are used to derive the optimal decision rule under both constant false-alarm rate (CFAR) and minimum probability of error (min P_e) criteria. It is shown that a dynamic detection scheme utilizing labeled information sourced from vision sensors, including imperfect error-prone labels, can improve upon the uniformly most powerful (UMP) test over an ensemble of trials, yielding higher CFAR detection rates than static detectors without vision sensors. Further, it is shown that unlabeled information, while matching the CFAR performance of the UMP test, can yield a lower overall error rate than a blind receiver with no NLOS/LOS knowledge.
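The channel model underlying the framework is a binary CTMC whose NLOS/LOS state toggles at random intervals. A minimal sketch of such a process is below; the transition rates (`lam_nl`, `lam_ln`) and the LOS-start convention are illustrative assumptions, not values from the work itself.

```python
import numpy as np

def simulate_ctmc_states(t_end, lam_nl=0.5, lam_ln=0.3, seed=None):
    """Simulate a binary CTMC toggling between LOS (1) and NLOS (0).

    lam_nl: assumed NLOS -> LOS transition rate (per unit time)
    lam_ln: assumed LOS -> NLOS transition rate (per unit time)
    Returns transition times and the state entered at each time.
    """
    rng = np.random.default_rng(seed)
    t, state = 0.0, 1                     # assume the chain starts in LOS
    times, states = [0.0], [state]
    while t < t_end:
        rate = lam_ln if state == 1 else lam_nl
        t += rng.exponential(1.0 / rate)  # exponential holding time in current state
        state ^= 1                        # toggle NLOS/LOS
        times.append(min(t, t_end))
        states.append(state)
    return np.array(times), np.array(states)
```

A receiver with labeled side information would observe `states` directly (possibly with label errors), while the unlabeled case would know only the statistics of the process, e.g., the stationary LOS probability `lam_nl / (lam_nl + lam_ln)`.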