Concepedia

TLDR

The study seeks the compute-optimal trade-off between model size and training tokens for transformer language models under a fixed compute budget. The authors trained over 400 models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens and found that model size and token count should be scaled in equal proportion, implying that current large models are significantly undertrained. To validate this, they trained Chinchilla, a compute-optimal model with 70 billion parameters and 4 times more training data than Gopher, using the same compute budget. Chinchilla outperforms much larger models, requires substantially less compute for fine-tuning and inference, and reaches 67.5% on MMLU, more than a 7% improvement over Gopher.

Abstract

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget. We find that current large language models are significantly undertrained, a consequence of the recent focus on scaling language models whilst keeping the amount of training data constant. By training over 400 language models ranging from 70 million to over 16 billion parameters on 5 to 500 billion tokens, we find that for compute-optimal training, the model size and the number of training tokens should be scaled equally: for every doubling of model size the number of training tokens should also be doubled. We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher.
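
To make the equal-scaling claim concrete, the sketch below is an illustration rather than code from the paper. It assumes the common approximation that training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D tokens, and that the compute-optimal N and D each grow with the square root of the budget (so doubling one doubles the other). The anchor point of roughly 70B parameters and ~1.4T training tokens for Chinchilla comes from the paper but is not stated in this excerpt.

```python
# A minimal sketch (not the authors' code) of the "scale model size and data
# equally" rule stated in the abstract. Assumes C ~= 6 * N * D training FLOPs
# and that both N and D grow with the square root of the compute budget.

REF_PARAMS = 70e9                        # Chinchilla parameter count
REF_TOKENS = 1.4e12                      # approximate Chinchilla training tokens (from the paper)
REF_FLOPS = 6 * REF_PARAMS * REF_TOKENS  # implied compute budget, ~5.9e23 FLOPs


def compute_optimal_split(budget_flops: float) -> tuple[float, float]:
    """Split a FLOP budget into (parameters, tokens), with both scaling
    as the square root of compute relative to the reference point."""
    scale = (budget_flops / REF_FLOPS) ** 0.5
    params = REF_PARAMS * scale
    tokens = budget_flops / (6 * params)  # back out tokens from C ~= 6 * N * D
    return params, tokens


if __name__ == "__main__":
    for budget in (1e21, 1e23, 1e25):
        n, d = compute_optimal_split(budget)
        print(f"{budget:.0e} FLOPs -> ~{n / 1e9:.1f}B params, ~{d / 1e9:.0f}B tokens")
```

Under these assumptions the tokens-to-parameters ratio stays fixed at roughly 20:1 across budgets, which is also why a model 4× smaller than Gopher trained on roughly 4× more data lands on about the same compute budget.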