Explainable Neural Claim Verification Using Rationalization
dc.contributor.author | Gurrapu, Sai Charan | en |
dc.contributor.committeechair | Batarseh, Feras A. | en |
dc.contributor.committeechair | Huang, Lifu | en |
dc.contributor.committeemember | Freeman, Laura J. | en |
dc.contributor.committeemember | Lourentzou, Ismini | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2022-06-16T08:00:16Z | en |
dc.date.available | 2022-06-16T08:00:16Z | en |
dc.date.issued | 2022-06-15 | en |
dc.description.abstract | The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled language models to generate high-quality text comparable to human-written text. If this growth continues, it can potentially lead to increased misinformation, which is a significant challenge. Although claim verification techniques exist, they lack proper explainability. Numerical scores such as attention and LIME and visualization techniques such as saliency heat maps are insufficient because they require specialized knowledge, making black-box NLP systems inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach can not only predict a verdict for the claim but also justify and rationalize its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping reinforce human-AI trust. We propose a new subfield in XAI called Rational AI (RAI) to improve research progress on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step towards trustworthy AI systems and ultimately helps mitigate misinformation. | en |
dc.description.abstractgeneral | The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled text generation models to produce high-quality text comparable to human-written text. If this growth continues, it can potentially lead to increased misinformation, which is a major societal challenge. Although claim verification techniques exist, they lack proper explainability, and it is difficult for the average user to understand a model's decision-making process. Numerical scores and visualization techniques exist to provide explainability, but they are insufficient because they require specialized domain knowledge. This makes black-box NLP systems inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach can not only predict a verdict for the claim but also justify and rationalize its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping reinforce human-AI trust. We propose a new subfield in XAI called Rational AI (RAI) to improve research progress on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step towards trustworthy AI systems and ultimately helps mitigate misinformation. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:34983 | en |
dc.identifier.uri | http://hdl.handle.net/10919/110790 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Creative Commons Attribution-ShareAlike 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by-sa/4.0/ | en |
dc.subject | Claim Verification | en |
dc.subject | Rationalization | en |
dc.subject | Explainability | en |
dc.subject | NLP Assurance | en |
dc.subject | Rational AI | en |
dc.subject | Rationality | en |
dc.subject | Misinformation Detection | en |
dc.title | Explainable Neural Claim Verification Using Rationalization | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |