Publication | Open Access
Measuring and Narrowing the Compositionality Gap in Language Models
Citations: 189 · References: 37 · Year: 2023 · Venue: unknown
Keywords: Artificial Intelligence, LLM Fine-tuning, Engineering, Large Language Models, Natural Language Processing, Syntax, Computational Linguistics, Language Studies, Language Models, Machine Translation, Cognitive Science, Question Answering, Compositional Reasoning, Principle of Compositionality, Compositionality, Compositionality Gap, Retrieval-Augmented Generation, Automated Reasoning, Linguistics, Language Generation
The study investigates language models' ability to perform compositional reasoning tasks and introduces the self-ask method to improve this capability. The authors quantify the compositionality gap, the fraction of multi-hop questions for which a model answers every sub-question correctly yet fails to produce the composed answer, and propose self-ask prompting, in which the model explicitly asks and answers its own follow-up questions before giving the final answer. They find that larger GPT-3 models improve single-hop but not multi-hop performance, leaving the compositionality gap unchanged, whereas elicitive prompting such as chain-of-thought and self-ask, especially when self-ask is augmented with a search engine, substantially narrows the gap and boosts accuracy.
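The gap metric described above is a conditional failure rate. A minimal sketch of its computation, assuming illustrative per-question booleans for sub-question and composed-question correctness (the field names are not from the paper):

```python
def compositionality_gap(results):
    """Fraction of questions whose sub-questions were all answered
    correctly but whose composed (multi-hop) answer was wrong.

    `results` is a list of dicts with illustrative keys:
      - "sub_correct": model answered every sub-question correctly
      - "multi_correct": model answered the composed question correctly
    """
    eligible = [r for r in results if r["sub_correct"]]
    if not eligible:
        return 0.0
    failed = [r for r in eligible if not r["multi_correct"]]
    return len(failed) / len(eligible)

results = [
    {"sub_correct": True,  "multi_correct": True},   # composed correctly
    {"sub_correct": True,  "multi_correct": False},  # knows facts, fails to compose
    {"sub_correct": True,  "multi_correct": False},
    {"sub_correct": False, "multi_correct": False},  # missing a sub-fact: excluded
]
print(compositionality_gap(results))  # 2 of 3 eligible questions -> 0.666...
```

Because the denominator counts only questions whose sub-facts the model already knows, a constant gap across model sizes means recall improves without a matching gain in composition.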
We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, we show that as model size increases, single-hop question answering performance improves faster than multi-hop performance does; the compositionality gap therefore does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning. We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly instead of implicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and then answers) follow-up questions before answering the initial question. We finally show that self-ask's structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.