Building Trustworthy Artificial Intelligence of Things Systems in Adversarial Environments

Date

2025-08-13

Publisher

Virginia Tech

Abstract

The recent decade has witnessed an explosion of artificial intelligence (AI) and Internet of Things (IoT) technologies. They have revolutionized people's daily lives, improving convenience, efficiency, and connectivity in previously unimaginable ways. Among the broad topics related to AI and IoT, the Artificial Intelligence of Things (AIoT) focuses specifically on the convergence of these two technologies, combining AI's intelligence with IoT's connectivity and data-gathering capabilities. To date, numerous AIoT applications have been developed, such as smart homes, autonomous driving, smart wearable devices, and automated medical diagnosis. In this dissertation, we investigate critical security and privacy problems in AIoT systems. Because an AIoT system comprises two fundamental components, IoT networks and AI algorithms, we naturally decompose our research into two parts: IoT network security and AI security. For IoT security, we explore new network attacks against critical IoT network protocols and propose defense mechanisms to enhance the IoT infrastructure. In Chapter 2, we propose a novel network timing attack that desynchronizes and disables chosen victim nodes in IoT networks. Our attack compromises the Precision Time Protocol, the de facto network timing protocol in time-sensitive IoT networks. In this chapter, we also introduce a defense mechanism based on network redundancy to defend against a minority of malicious nodes. In Chapter 3, we present the design of a trustworthy and verifiable spectrum sharing system leveraging blockchain technology. This system aims to defend against malicious participants and securely record their behaviors. We focus on spectrum sharing because it promotes more efficient utilization of spectrum resources, thereby enhancing the communication infrastructure of IoT networks.
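To illustrate why timing attacks on the Precision Time Protocol are possible at all (this is a generic sketch, not the dissertation's specific attack): PTP estimates a slave's clock offset under the assumption that path delay is symmetric, so an attacker who delays traffic in only one direction silently skews the estimate by half the injected delay. The nanosecond values below are hypothetical.

```python
# Timestamp names follow IEEE 1588: t1 = master sends Sync,
# t2 = slave receives Sync, t3 = slave sends Delay_Req,
# t4 = master receives Delay_Req.

def ptp_offset(t1, t2, t3, t4):
    """Slave-vs-master clock offset, assuming symmetric path delay."""
    return ((t2 - t1) - (t4 - t3)) / 2

# Honest exchange: true offset 100 ns, symmetric 50 ns path delay.
honest = ptp_offset(t1=0, t2=150, t3=1000, t4=950)    # recovers 100.0 ns

# Attacker holds the Sync message for an extra 400 ns on the
# master->slave path only (t2 grows; nothing else changes).
# The estimate is skewed by half the injected delay: 300.0 ns.
attacked = ptp_offset(t1=0, t2=550, t3=1000, t4=950)
```

Because the slave cannot distinguish asymmetric attacker-induced delay from genuine clock offset, it "corrects" its clock by the skewed amount, which is the lever a desynchronization attack exploits.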
For AI security, we first focus on federated learning (FL), a leading distributed learning paradigm built upon decentralized networks. We investigate privacy attacks against FL from a red-team perspective to better understand and expose potential system vulnerabilities. We then explore adversarial attacks on multimodal diffusion models, motivated by the growing popularity of generative AI technologies. In Chapter 4, we introduce our customized model inversion attack against medical FL systems. Our attack can reconstruct sensitive real-life COVID-19 X-ray images, brain tumor MRI images, and clinical text records, demonstrating its applicability and severity in practical medical systems. In Chapter 5, we present our novel model inversion attack, named Scale-MIA, against secure FL systems. This attack reconstructs local training samples from the model updates shared between the FL server and its clients, challenging the fundamental privacy-preserving property of FL systems. In Chapter 6, we introduce our novel adversarial attacks against multimodal diffusion models. Our attack adds customized imperceptible perturbations to the image prompts and can mislead the diffusion model into generating attacker-chosen content, including NSFW content. We hope this work offers insights into fundamental security and privacy research on AIoT systems.
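As a minimal sketch of the class of technique behind imperceptible image-prompt perturbations (not the dissertation's actual method): a projected signed-gradient attack nudges an input, within a small L-infinity budget, so that its embedding moves toward the embedding of attacker-chosen content. The linear "encoder" below is a toy stand-in for a diffusion model's image encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))           # toy encoder weights
encode = lambda x: W @ x                   # stand-in for the image encoder

x = rng.standard_normal(16)                # benign image prompt
target = encode(rng.standard_normal(16))   # embedding of attacker-chosen content

eps, alpha, steps = 0.1, 0.02, 200         # perturbation budget, step size
delta = np.zeros_like(x)
for _ in range(steps):
    # Gradient of ||encode(x + delta) - target||^2 with respect to delta.
    grad = 2 * W.T @ (encode(x + delta) - target)
    delta -= alpha * np.sign(grad)         # signed gradient step
    delta = np.clip(delta, -eps, eps)      # project onto the eps-ball

before = np.linalg.norm(encode(x) - target)
after = np.linalg.norm(encode(x + delta) - target)
```

The perturbation stays within a budget small enough to be visually negligible, yet the embedding distance to the target shrinks, which is the mechanism that lets such attacks steer generation toward attacker-chosen content.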

Keywords

Security and Privacy, Machine Learning, Internet of Things
