Machine Learning Model Watermarking through DRAM PUFs

Date

2025-05-22

Publisher

Virginia Tech

Abstract

Neural networks are now ubiquitous, with applications spanning social media, healthcare, navigation, and personal assistance. Modern neural networks are large-scale, containing billions of parameters, so training them is costly in both resources and finances. With the rising cost of training, security concerns over model theft have also emerged: an adversarial party may replicate a pre-trained model without authorization and deploy it to their own advantage. In such scenarios, watermarking allows the legitimate owner to establish ownership of the stolen model. Researchers have developed various watermarking schemes for neural networks, typically by modifying the training code. In this thesis, I developed a hardware-based watermarking scheme that exploits the PUF (Physical Unclonable Function) characteristics of DRAM modules. PUFs serve as strong hardware-based security fingerprints, and DRAM has been shown to exhibit inherent PUF behavior. One way to extract a PUF from DRAM is to disable its refresh mechanism, which lets the stored charge decay and causes device-specific bit-flips. In my work, a machine learning model is trained on a PUF-enabled DRAM platform where the model parameters are stored directly on the decaying DRAM cells. This integrates the DRAM's PUF into the model parameters and embeds a robust watermark without any modification to the training code.
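As a rough illustration of the mechanism the abstract describes, the sketch below is my own simplification, not the thesis's actual FPGA/DRAM platform: refresh-disabled DRAM decay is modeled as a device-seeded random bit-flip mask applied to the bytes of stored float32 weights (a hypothetical stand-in for the chip's physical decay fingerprint), and ownership is checked by comparing a suspect model's bit-level deviations from the clean weights against that fingerprint.

```python
import numpy as np

def decay_mask(n_bytes, flip_prob, device_seed):
    # Per-byte XOR masks: each bit flips with probability flip_prob.
    # device_seed is a hypothetical stand-in for the chip-unique decay
    # pattern; on real hardware the flips come from per-cell physics.
    rng = np.random.default_rng(device_seed)
    bits = rng.random((n_bytes, 8)) < flip_prob
    return np.packbits(bits, axis=1, bitorder="little").ravel()

def store_on_decaying_dram(weights, flip_prob, device_seed):
    # Writing float32 weights to refresh-disabled DRAM: the stored
    # bytes pick up the device's bit-flip pattern.
    buf = np.frombuffer(np.asarray(weights, np.float32).tobytes(),
                        dtype=np.uint8).copy()
    buf ^= decay_mask(buf.size, flip_prob, device_seed)
    return np.frombuffer(buf.tobytes(), dtype=np.float32)

def watermark_match(suspect, clean, flip_prob, device_seed):
    # Ownership check: fraction of bytes whose bit-level deviation
    # from the clean weights equals this device's fingerprint.
    a = np.frombuffer(np.asarray(suspect, np.float32).tobytes(), np.uint8)
    b = np.frombuffer(np.asarray(clean, np.float32).tobytes(), np.uint8)
    return float(np.mean((a ^ b) == decay_mask(a.size, flip_prob, device_seed)))
```

A model "trained" on this simulated platform would carry the flip pattern of its device seed, so the rightful owner (who knows the fingerprint) gets a perfect match while other devices do not.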

Keywords

Neural-Network, DRAM, Watermark, FPGA, Hardware Acceleration
