Publication | Open Access
Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
Citations: 119
Year: 2021
Keywords: LLM Fine-tuning, Engineering, Machine Learning, Communication, Multilingual Pretraining, Large Language Model, Corpus Linguistics, Candidate Permutations, Text Mining, Natural Language Processing, Large Language Models, Information Retrieval, Data Science, Computational Linguistics, Language Studies, Language Models, Machine Translation, Large AI Model, Computer Science, Retrieval Augmented Generation, GPT-family Models, Ordered Prompts, Linguistics
Few-shot prompting with large pretrained language models can achieve competitive results, but performance depends heavily on the order in which the few-shot examples appear: some permutations yield near state-of-the-art accuracy while others perform at chance level. Selecting a good permutation with a held-out development set would deviate from the true few-shot setting, so the authors instead construct an artificial development set from the model's own generations and rank candidate permutations by entropy statistics over it to identify high-performing prompts. Their analysis shows that prompt-order sensitivity persists across model sizes, is not tied to specific samples, and does not transfer between models, and the proposed method achieves a 13% relative improvement on eleven text-classification tasks.
When primed with only a handful of training samples, very large, pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised, fine-tuned, large, pretrained language models. We demonstrate that the order in which the samples are provided can make the difference between near state-of-the-art and random guess performance: essentially some permutations are “fantastic” and some not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and that a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the true few-shot setting as it requires additional annotated data. Instead, we use the generative nature of language models to construct an artificial development set and based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method yields a 13% relative improvement for GPT-family models across eleven different established text classification tasks.
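To make the selection procedure concrete, the sketch below illustrates the entropy-based ranking idea in Python under stated assumptions: label_probs is a hypothetical helper standing in for whatever call queries the language model for per-label probabilities, probing_set stands for the artificial development set generated by the model itself, and the prompt template is arbitrary. None of these names come from the paper, and only a global-entropy style score (how balanced the predicted labels are over the probing set) is shown; it is a minimal sketch, not the authors' implementation.

# Minimal sketch: rank candidate orderings of the few-shot examples by the
# entropy of the predicted-label distribution on a model-generated probing set.
# Assumptions (hypothetical, not from the paper): label_probs(prompt, text)
# returns a dict {label: probability} from the language model.
import itertools
import math
from collections import Counter

def global_entropy(ordering, probing_set, label_probs, labels):
    """Score one ordering of the training samples: entropy of the
    predicted-label distribution over the artificial probing set."""
    # Concatenate the demonstrations in the given order (toy template).
    prompt = "\n".join(f"{text}\t{label}" for text, label in ordering)
    predictions = []
    for text in probing_set:
        probs = label_probs(prompt, text)              # model's label probabilities
        predictions.append(max(labels, key=probs.get))  # predicted label
    counts = Counter(predictions)
    total = len(predictions)
    # Higher entropy = more balanced predictions = less degenerate ordering.
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def rank_orderings(train_samples, probing_set, label_probs, labels, n_candidates=24):
    """Enumerate (or sample) candidate permutations and sort them by score."""
    candidates = itertools.islice(itertools.permutations(train_samples), n_candidates)
    scored = [(global_entropy(p, probing_set, label_probs, labels), p) for p in candidates]
    return sorted(scored, key=lambda s: s[0], reverse=True)

In use, one would keep the top-ranked orderings as prompts for the actual task; the key design point carried over from the abstract is that the probing set comes from the model's own generations, so no additional annotated data is needed.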