Enhancing Trust in Autonomous Systems without Verifying Software

Date

2019-06-12

Publisher

Virginia Tech

Abstract

The complexity of the software behind autonomous systems is growing rapidly, as is the range of applications they are trusted with. Codebases routinely run into millions of lines, which compounds the verification challenge. The machine learning algorithms involved are often "black boxes" whose precise workings are unknown even to the developers applying them, and whose behavior is undefined when they encounter scenarios outside their training. At that scale, the possibility of bugs or malicious code is considerable. This work develops an approach to monitor, and when necessary override, the behavior of autonomous systems independently of the software controlling them. Application-isolated safety monitors are implemented in configurable hardware to ensure that the behavior of an autonomous system stays within what is intended. The monitors may share the system's sensor inputs, but their outputs are engaged only when the system violates its prescribed behavior. For each rule the system is expected to follow, a dedicated monitor processes the relevant sensor information. The expected behavior is specified in linear temporal logic (LTL), and the associated monitors are implemented on a field-programmable gate array (FPGA). An off-the-shelf drone is used to demonstrate the effectiveness of the monitors without any physical modification to the drone. Upon detection of a violation, appropriate corrective actions are persistently enforced on the autonomous system.
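
To make the monitor-per-rule architecture concrete, below is a minimal sketch of one such monitor, assuming a simple altitude-ceiling rule expressed in LTL as G(altitude <= ALT_MAX). It is written in HLS-style C++ for illustration only; the names (AltCeilingMonitor, ALT_MAX_CM) and the threshold are hypothetical, not the thesis's actual implementation. The latched violation flag mirrors the persistent enforcement described above.

    // Hypothetical HLS-style C++ sketch of one hardware safety monitor for
    // the LTL safety property G(altitude <= ALT_MAX). Once the rule is
    // broken the flag latches, so the corrective action stays engaged.
    #include <cstdint>
    #include <cstdio>

    constexpr int32_t ALT_MAX_CM = 12000;  // assumed 120 m ceiling, in centimeters

    // One monitor per rule: it reads the shared sensor stream and raises
    // an override line only after the prescribed behavior is violated.
    struct AltCeilingMonitor {
        bool violated = false;  // latched: stays set after first violation

        // Called once per sensor sample (one clock tick after synthesis).
        bool step(int32_t altitude_cm) {
            if (altitude_cm > ALT_MAX_CM) {
                violated = true;  // G(alt <= ALT_MAX) is falsified for good
            }
            return violated;      // true => engage corrective action
        }
    };

    int main() {
        AltCeilingMonitor mon;
        // Last two samples: one violation, then the latched override persists.
        const int32_t samples[] = {9000, 11500, 12100, 10000};
        for (int32_t alt : samples) {
            std::printf("alt=%d cm override=%d\n", alt, mon.step(alt));
        }
    }

A safety property of this form needs no lookahead, which is why such monitors map naturally onto small hardware state machines: each rule reduces to a few registers and comparators running alongside, and isolated from, the control software.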

Keywords

Autonomy, Runtime Verification, Field Programmable Gate Array (FPGA), Formal Methods, Monitor, Model Checking, UAS, Drone Aircraft, Security, Linear Temporal Logic (LTL), High-Level Synthesis (HLS), Malware, Assurance, Robotics, Firmware, Hardware
