Concepedia

TLDR

Finetuning language models on instruction-style datasets improves performance and generalization to unseen tasks. The study investigates three scaling axes for instruction finetuning: the number of finetuning tasks, the model size, and the inclusion of chain-of-thought data. Scaling all three yields dramatic gains across PaLM, T5, and U-PaLM models: Flan-PaLM 540B achieves state-of-the-art five-shot MMLU (75.2%) and outperforms the baseline PaLM 540B by 9.4% on average, while the released Flan-T5 checkpoints deliver few-shot performance competitive with much larger models.

Abstract

Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
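As a rough illustration of what "datasets phrased as instructions" means, the sketch below converts a plain supervised (input, target) example into an instruction-style prompt. This is a hypothetical helper for exposition, not code from the paper; the function name, template, and example task are assumptions.

```python
def to_instruction_example(instruction: str, model_input: str, target: str) -> dict:
    """Phrase a supervised example as an instruction-following (prompt, target) pair.

    Illustrative only: instruction-finetuning pipelines typically apply many
    such templates per task; this shows a single fixed template.
    """
    prompt = f"{instruction}\n\nInput: {model_input}\nAnswer:"
    return {"prompt": prompt, "target": target}

# Hypothetical yes/no classification example rephrased as an instruction.
example = to_instruction_example(
    "Answer the following yes/no question.",
    "Is the sky blue on a clear day?",
    "yes",
)
```

Finetuning on many tasks rewritten this way is what lets the model follow instructions for tasks it was never trained on.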