Publication | Open Access
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Citations: 70 | References: 28 | Year: 2022
Venue: EMNLP 2022
Keywords: Artificial Intelligence, LLM Fine-tuning, Engineering, Semantics, Large Language Model, Consistent Reasoning, Natural Language Processing, Computational Linguistics, Language Studies, Machine Translation, Pre-trained Language Models, Plausible Reasoning, Cognitive Science, Question Answering, Reasoning System, Reasoning, Explanation-based Learning, Automated Reasoning, Logical Reasoning, Linguistics, Explainable AI, Maieutic Prompting
Pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this. However, these approaches are fundamentally bounded by the correctness of the explanations, which are themselves often noisy and inconsistent. In this work, we develop Maieutic Prompting, which aims to infer a correct answer to a question even from the unreliable generations of the LM. Maieutic Prompting induces a tree of explanations abductively (e.g., X is true, because ...) and recursively, then frames the inference as a satisfiability problem over these explanations and their logical relations. We test Maieutic Prompting on true/false QA over three challenging benchmarks that require complex commonsense reasoning. Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods, and, as a fully unsupervised approach, performs competitively with supervised models. We also show that Maieutic Prompting improves robustness in inference while providing interpretable rationales.
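The core inference step described above — treating each generated explanation as a weighted proposition and picking the truth assignment that best satisfies their logical relations — can be illustrated with a minimal sketch. This is not the paper's implementation: the node names, belief weights, and brute-force MAX-SAT solver below are hypothetical simplifications of the weighted-satisfiability formulation the abstract describes.

```python
from itertools import product

def max_sat_answer(question, nodes, weights, relations):
    """Brute-force weighted MAX-SAT over truth assignments (illustrative only).

    nodes:     proposition ids, including the question itself
    weights:   {node: belief weight, in [0, 1], that the node is True}
    relations: list of (a, b, kind), kind in {"entails", "contradicts"}
    """
    best_score, best_assign = float("-inf"), None
    for values in product([True, False], repeat=len(nodes)):
        assign = dict(zip(nodes, values))
        score = 0.0
        # Unary clauses: reward agreeing with the belief in each proposition.
        for n in nodes:
            score += weights[n] if assign[n] else (1.0 - weights[n])
        # Binary clauses: reward logically consistent assignments.
        for a, b, kind in relations:
            if kind == "entails" and (not assign[a] or assign[b]):
                score += 1.0
            elif kind == "contradicts" and not (assign[a] and assign[b]):
                score += 1.0
        if score > best_score:
            best_score, best_assign = score, assign
    return best_assign[question]

# Toy tree: one abductive explanation supporting the question ("E_T"),
# one undermining it ("E_F"), with hypothetical belief weights.
nodes = ["Q", "E_T", "E_F"]
weights = {"Q": 0.5, "E_T": 0.9, "E_F": 0.2}
relations = [("E_T", "Q", "entails"), ("E_F", "Q", "contradicts")]
answer = max_sat_answer("Q", nodes, weights, relations)  # → True
```

Because the strongly believed explanation E_T entails Q while the weakly believed E_F contradicts it, the maximizing assignment sets Q to True; a real solver would use an off-the-shelf MAX-SAT engine rather than enumeration, which is exponential in the number of nodes.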