Backward Reasoning in LLMs: A Strategy for Identifying Irrelevant Context in Mathematical Word Problems
Abstract
We investigate backward reasoning in LLMs for identifying irrelevant context in mathematical word problems. Our evaluation across models from 1B to 70B parameters exposes a sizeable performance gap between forward and backward reasoning: accuracy on the latter lags by 3-47%, with parameter-constrained models showing the largest drops. To mitigate this gap, we propose five structured prompting methods: semantic-role abstraction, timeline reasoning, tree representations, contrastive prompting, and a unified process-supervision prompt. The unified scheme yields the most reliable gains, boosting backward-reasoning accuracy by almost 20%. In addition, simply rephrasing problems in plain language lifts performance by roughly 8% across the board, underscoring the importance of task formulation. Strikingly, models can identify irrelevant facts with about 75% accuracy when asked directly, yet fail to exploit this latent knowledge during problem solving, with accuracy plummeting to 20% once distractors are embedded in the input. Leveraging this insight, we introduce a consistency-based variable-verification framework that activates hidden knowledge through backward reasoning, filters spurious context, and markedly strengthens robustness in realistic settings where misleading details are common.
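To make the variable-verification idea concrete, below is a minimal Python sketch of one plausible backward-consistency check. It assumes a hypothetical query_llm(prompt) -> str helper wrapping whatever chat model is under test; the masking convention, the "irrelevant" sentinel, and the filter-then-re-solve loop are illustrative assumptions, not the report's exact protocol.

```python
from typing import Callable, Dict

def verify_variable(
    problem: str,
    answer: str,
    name: str,
    value: str,
    query_llm: Callable[[str], str],  # assumed helper, not part of the report
) -> bool:
    """Backward-consistency check for one quantity: mask its value,
    reveal the final answer, and ask the model to recover the mask.
    A value that cannot be recovered from the answer is treated as
    irrelevant context rather than a genuine problem variable."""
    masked = problem.replace(value, "X")  # masks every occurrence; fine for a sketch
    prompt = (
        f"Problem: {masked}\n"
        f"The final answer is {answer}.\n"
        f"What value of X ({name}) is consistent with this answer? "
        f"If the answer does not depend on X, reply exactly: irrelevant."
    )
    reply = query_llm(prompt).strip().lower()
    # Relevant variables are recoverable from the answer; distractors are not.
    return "irrelevant" not in reply and value in reply

def filter_context(
    problem: str,
    answer: str,
    variables: Dict[str, str],
    query_llm: Callable[[str], str],
) -> Dict[str, str]:
    """Keep only quantities that pass the backward check, so the
    problem can be re-solved without the distracting facts."""
    return {
        name: value
        for name, value in variables.items()
        if verify_variable(problem, answer, name, value, query_llm)
    }
```

In this reading, the framework would first solve the problem forward to obtain a candidate answer, run the backward check once per stated quantity, and then re-solve on the filtered statement; agreement between the two passes is what surfaces the latent distractor knowledge the abstract describes.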