Virginia Tech National Security Institute
Browsing Virginia Tech National Security Institute by Author "Adams, Stephen"
Now showing 1 - 2 of 2
- Deep-Learning-Based Digitization of Protein Self-Assembly to Print Biodegradable Physically Unclonable Labels for Device Security
  Pradhan, Sayantan; Rajagopala, Abhi D.; Meno, Emma; Adams, Stephen; Elks, Carl R.; Beling, Peter A.; Yadavalli, Vamsi K. (MDPI, 2023-08-28)
  The increasingly pervasive problem of counterfeiting affects both individuals and industry. In particular, public health and medical fields face threats to device authenticity and patient privacy, especially in the post-pandemic era. Physical unclonable functions (PUFs) present a modern solution: counterfeit-proof security labels that securely authenticate and identify physical objects. PUFs harness innately entropic information generators to create a unique fingerprint for an authentication protocol. This paper proposes a facile protein self-assembly process as the entropy generator for a unique biological PUF. The proposed image digitization process applies a deep learning model to extract a feature vector from the self-assembly image, which is then binarized and debiased to produce a cryptographic key. The NIST SP 800-22 Statistical Test Suite was used to evaluate the randomness of the generated keys, which proved sufficiently stochastic. To facilitate deployment on physical objects, the PUF images were printed on flexible silk-fibroin-based biodegradable labels using functional protein bioinks. Images of the labels were captured with a cellphone camera and referenced against the source image for error rate comparison. The deep-learning-based biological PUF has potential as a low-cost, scalable, highly randomized strategy for anti-counterfeiting technology.
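  The key-derivation pipeline the abstract describes (feature extraction, then binarization and debiasing) can be sketched in outline. The sketch below is illustrative only: a fixed random projection stands in for the paper's deep learning feature extractor, and von Neumann debiasing is an assumed choice of debiasing step; neither specific comes from the paper.

  ```python
  import numpy as np

  def extract_features(image, dim=256):
      # Stand-in for the paper's deep-learning feature extractor:
      # a fixed (seeded) random projection of the flattened image.
      rng = np.random.default_rng(0)
      proj = rng.standard_normal((dim, image.size))
      return proj @ image.ravel()

  def binarize(features):
      # Threshold each component at the median to get a raw bit string.
      return (features > np.median(features)).astype(np.uint8)

  def von_neumann_debias(bits):
      # Assumed debiasing step (von Neumann extractor): scan
      # non-overlapping pairs; 01 -> 0, 10 -> 1, discard 00 and 11.
      return np.array([int(a) for a, b in zip(bits[0::2], bits[1::2]) if a != b],
                      dtype=np.uint8)

  def puf_key(image):
      # Features -> raw bits -> debiased bits -> hex-encoded key.
      bits = von_neumann_debias(binarize(extract_features(image)))
      return np.packbits(bits).tobytes().hex()

  # Example: derive a key from a synthetic 64x64 "self-assembly" image.
  image = np.random.default_rng(42).random((64, 64))
  print(puf_key(image))
  ```

  In the paper's setting, the randomness of such keys is what the NIST SP 800-22 suite evaluates; the suite itself is a separate statistical test battery, not shown here.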
- A survey of inverse reinforcement learning
  Adams, Stephen; Cody, Tyler; Beling, Peter A. (Springer, 2022-08)
  Learning from demonstration, or imitation learning, is the process of learning to act in an environment from examples provided by a teacher. Inverse reinforcement learning (IRL) is a specific form of learning from demonstration that attempts to estimate the reward function of a Markov decision process from examples provided by the teacher. The reward function is often considered the most succinct description of a task. In simple applications, the reward function may be known or easily derived from properties of the system and hard-coded into the learning process. In complex applications, however, this may not be possible, and it may be easier to learn the reward function by observing the actions of the teacher. This paper provides a comprehensive survey of the literature on IRL. The survey outlines the differences between IRL and two similar methods, apprenticeship learning and inverse optimal control; organizes the IRL literature by principal method; describes applications of IRL algorithms; and identifies areas for future research.
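  To make the reward-recovery idea concrete, here is a minimal sketch of one classical IRL recipe: feature-expectation matching with a linear reward on a toy chain MDP. The MDP, one-hot state features, horizon, and learning rate are all illustrative assumptions and do not reproduce any particular algorithm from the survey.

  ```python
  import numpy as np

  # Tiny deterministic chain MDP: states 0..4, actions left/right.
  N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

  def step(s, a):
      return max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)

  def value_iteration(reward, iters=100):
      # Greedy policy under the current reward estimate
      # (reward is received on arrival at the next state).
      V = np.zeros(N_STATES)
      for _ in range(iters):
          Q = np.array([[reward[step(s, a)] + GAMMA * V[step(s, a)]
                         for a in range(N_ACTIONS)] for s in range(N_STATES)])
          V = Q.max(axis=1)
      return Q.argmax(axis=1)

  def feature_expectations(policy, start=0, horizon=20):
      # Discounted state-visitation counts; features are one-hot states.
      mu, s = np.zeros(N_STATES), start
      for t in range(horizon):
          mu[s] += GAMMA ** t
          s = step(s, policy[s])
      return mu

  # "Expert" demonstrations: always move right, toward state 4.
  mu_expert = feature_expectations(np.ones(N_STATES, dtype=int))

  # IRL loop: nudge the linear reward weights in the direction that
  # makes the learner's visitation match the expert's.
  w = np.zeros(N_STATES)
  for _ in range(50):
      policy = value_iteration(w)
      w += 0.1 * (mu_expert - feature_expectations(policy))

  print("recovered reward weights:", np.round(w, 2))
  print("greedy policy under them:", value_iteration(w))
  ```

  Under these assumptions the loop pushes weight onto the rightmost state, so the recovered reward induces the expert's move-right behavior; the surveyed literature covers far more general formulations (stochastic dynamics, nonlinear rewards, maximum-entropy objectives).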