Explainable Neural Claim Verification Using Rationalization

dc.contributor.author: Gurrapu, Sai Charan
dc.contributor.committeechair: Batarseh, Feras A.
dc.contributor.committeechair: Huang, Lifu
dc.contributor.committeemember: Freeman, Laura J.
dc.contributor.committeemember: Lourentzou, Ismini
dc.contributor.department: Computer Science
dc.date.accessioned: 2022-06-16T08:00:16Z
dc.date.available: 2022-06-16T08:00:16Z
dc.date.issued: 2022-06-15
dc.description.abstract: The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled language models to generate text of the same quality as human-written text. If this growth continues, it can potentially lead to increased misinformation, which is a significant challenge. Although claim verification techniques exist, they lack proper explainability. Numerical scores such as attention and LIME values and visualization techniques such as saliency heat maps are insufficient because they require specialized knowledge, making black-box NLP systems inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach can not only predict a verdict for a claim but also justify and rationalize its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping to reinforce human-AI trust. We propose a new subfield in XAI called Rational AI (RAI) to advance research on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step towards trustworthy AI systems and ultimately helps mitigate misinformation.
dc.description.abstractgeneral: The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled text generation models to produce high-quality text that is on par with human-written text. If this growth continues, it can potentially lead to increased misinformation, which is a major societal challenge. Although claim verification techniques exist, they lack proper explainability, and it is difficult for the average user to understand a model's decision-making process. Numerical scores and visualization techniques exist to provide explainability, but they are insufficient because they require specialized domain knowledge. This makes black-box NLP systems inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach can not only predict a verdict for a claim but also justify and rationalize its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping to reinforce human-AI trust. We propose a new subfield in XAI called Rational AI (RAI) to advance research on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step towards trustworthy AI systems and ultimately helps mitigate misinformation.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:34983
dc.identifier.uri: http://hdl.handle.net/10919/110790
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.subject: Claim Verification
dc.subject: Rationalization
dc.subject: Explainability
dc.subject: NLP Assurance
dc.subject: Rational AI
dc.subject: Rationality
dc.subject: Misinformation Detection
dc.title: Explainable Neural Claim Verification Using Rationalization
dc.type: Thesis
thesis.degree.discipline: Computer Science and Applications
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Gurrapu_SC_T_2022.pdf
Size: 1.79 MB
Format: Adobe Portable Document Format