Enhancing Trust in Autonomous Systems without Verifying Software
dc.contributor.author | Stamenkovich, Joseph Allan | en |
dc.contributor.committeechair | Patterson, Cameron D. | en |
dc.contributor.committeemember | Saad, Walid | en |
dc.contributor.committeemember | Huang, Bert | en |
dc.contributor.department | Electrical and Computer Engineering | en |
dc.date.accessioned | 2019-06-13T08:00:38Z | en |
dc.date.available | 2019-06-13T08:00:38Z | en |
dc.date.issued | 2019-06-12 | en |
dc.description.abstract | The complexity of the software behind autonomous systems is rapidly growing, as is the range of their applications. It is not unusual for the code to reach millions of lines, which compounds the verification challenge. The machine learning algorithms involved are often "black boxes" whose precise workings are unknown even to the developers applying them, and whose behavior is undefined when an untrained scenario is encountered. With so much code, the possibility of bugs or malicious code is considerable. An approach is developed to monitor, and if necessary override, the behavior of autonomous systems independently of the software controlling them. Application-isolated safety monitors are implemented in configurable hardware to ensure that the behavior of an autonomous system is limited to what is intended. The sensor inputs may be shared with the software, but the output from the monitors is engaged only when the system violates its prescribed behavior. For each specific rule the system is expected to follow, a dedicated monitor processes the relevant sensor information. The behavior is defined in linear temporal logic (LTL), and the associated monitors are implemented in a field-programmable gate array (FPGA). An off-the-shelf drone is used to demonstrate the effectiveness of the monitors without any physical modifications to the drone. Upon detection of a violation, appropriate corrective actions are persistently enforced on the autonomous system. | en |
dc.description.abstractgeneral | Autonomous systems are surprisingly vulnerable, not just to malicious hackers but to design errors and oversights. The lines of code required can quickly climb into the millions, and the artificial-intelligence decision algorithms can be inscrutable and wholly dependent on the information they are trained on. These factors make verifying the core software running our autonomous cars, drones, and everything else prohibitively difficult by traditional means. Independent safety monitors are implemented to provide oversight for these autonomous systems. A semi-automatic design process efficiently creates error-free monitors from the safety rules drones must follow. These monitors remain separate and isolated from the software typically controlling the system, but use the same sensor information. They are embedded in the circuitry and act as small, task-specific processors, each watching to ensure that a particular rule is not violated; otherwise, they take control of the system and force corrective behavior. The monitors are added to a consumer off-the-shelf (COTS) drone to demonstrate their effectiveness. For every rule monitored, an override is triggered when it is violated. As with any electronic component, their effectiveness depends on reliable sensor information, as well as on the completeness of the rules from which the monitors are specified. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:20441 | en |
dc.identifier.uri | http://hdl.handle.net/10919/89950 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Autonomy | en |
dc.subject | Runtime Verification | en |
dc.subject | Field programmable gate arrays | en |
dc.subject | Monitor | en |
dc.subject | Formal Methods | en |
dc.subject | UAS | en |
dc.subject | Drone aircraft | en |
dc.subject | Security | en |
dc.subject | Linear Temporal Logic | en |
dc.subject | LTL | en |
dc.subject | High-Level Synthesis | en |
dc.subject | HLS | en |
dc.subject | Model Checking | en |
dc.subject | drone | en |
dc.subject | malware | en |
dc.subject | assurance | en |
dc.subject | robotics | en |
dc.subject | firmware | en |
dc.subject | hardware | en |
dc.title | Enhancing Trust in Autonomous Systems without Verifying Software | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |
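The abstracts above describe monitors that each watch one LTL safety rule and, on violation, latch into a persistently enforced override. A minimal sketch of that latched-monitor behavior follows; Python is used purely for illustration (the thesis implements monitors in FPGA hardware), and the altitude-ceiling rule, names, and 120 m limit are assumed examples rather than details from the work.

```python
# Illustrative runtime monitor for an LTL safety rule of the form
# G(altitude <= ceiling): "the drone never exceeds its altitude ceiling".
# Hypothetical example only; the thesis synthesizes such monitors into
# hardware rather than running them as software.
class SafetyMonitor:
    def __init__(self, ceiling_m: float):
        self.ceiling_m = ceiling_m
        self.violated = False  # latched so the corrective override persists

    def step(self, altitude_m: float) -> bool:
        """Process one sensor sample; return True if the override is engaged."""
        if altitude_m > self.ceiling_m:
            self.violated = True  # once a violation is seen, it stays latched
        return self.violated


monitor = SafetyMonitor(ceiling_m=120.0)
print(monitor.step(80.0))   # rule holds, no override
print(monitor.step(130.0))  # violation detected, override engaged
print(monitor.step(80.0))   # override persists even after descending
```

The latch mirrors the abstract's point that corrective actions are persistently enforced: a safety property of the form G(p), once falsified on any trace prefix, remains falsified for the rest of the run.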