Backward Reasoning in LLMs: A Strategy for Identifying Irrelevant Context in Mathematical Word Problems
dc.contributor.author | Kumaran, Aishwarya | en |
dc.contributor.committeechair | Ramakrishnan, Narendran | en |
dc.contributor.committeemember | Wang, Xuan | en |
dc.contributor.committeemember | Huang, Lifu | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-06-04T08:01:45Z | en |
dc.date.available | 2025-06-04T08:01:45Z | en |
dc.date.issued | 2025-06-03 | en |
dc.description.abstract | We investigate backward reasoning in LLMs for identifying irrelevant context in mathematical word problems. Our evaluation across models from 1B to 70B parameters exposes a sizeable performance gap between forward and backward reasoning: accuracy on the latter lags by 3-47%, with parameter-constrained models showing the largest drops. To mitigate this gap we propose five structured prompting methods: semantic-role abstraction, timeline reasoning, tree representations, contrastive prompting, and a unified process-supervision prompt. The unified scheme yields the most reliable gains, boosting backward-reasoning accuracy by almost 20%. In addition, simply rephrasing problems in plain language lifts performance by roughly 8% across the board, underscoring the importance of task formulation. Strikingly, models can identify irrelevant facts with about 75% accuracy when asked directly, yet fail to exploit this latent knowledge during problem solving, with accuracy plummeting to 20% once distractors are embedded in the input. Leveraging this insight, we introduce a consistency-based variable-verification framework that activates hidden knowledge through backward reasoning, filters spurious context, and markedly strengthens robustness in realistic settings where misleading details are common. | en |
dc.description.abstractgeneral | We explore how AI language models solve math word problems, especially when the question includes extra, irrelevant details. We found that while these models are generally good at solving problems when given all the facts in the right order, they struggle much more when they have to work backwards from the final answer to an unknown starting value, or when some information is unnecessary. This struggle is worse for smaller models, but even the large models show a clear gap. To help with this, we tested five different ways of guiding the models through the problem more clearly, such as using structured language, breaking down the steps, or showing both wrong and right ways to solve a problem. The most effective method gave nearly a 20% improvement in accuracy. Surprisingly, just rewriting the questions in a better format improved results by 8%, showing how much the wording matters. We also discovered that these models can often tell which facts are unimportant if asked directly. However, when those same facts are mixed into a real problem, the models tend to get confused. To fix this, we designed a method that helps the models double-check their reasoning and ignore misleading information. This made them much more reliable at solving real-world-style problems where not everything in the question is useful. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:44237 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135022 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Large Language Model | en |
dc.subject | Natural Language Processing | en |
dc.subject | Structured Prompting | en |
dc.subject | Mathematical Reasoning | en |
dc.title | Backward Reasoning in LLMs: A Strategy for Identifying Irrelevant Context in Mathematical Word Problems | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |