Publication | Open Access
REALM: Retrieval-Augmented Language Model Pre-Training
513 citations · 28 references · Year: 2020
Topics: LLM Fine-tuning, Engineering, Multilingual Pretraining, Large Language Model, Corpus Linguistics, Text Mining, Natural Language Processing, Information Retrieval, Data Science, Computational Linguistics, World Knowledge, Masked Language Modeling, Language Studies, Language Models, Machine Translation, Question Answering, Language Model Pre-training, Deep Learning, Retrieval Augmented Generation, Linguistics
Language model pre-training captures world knowledge implicitly, but this requires ever-larger networks to cover more facts. The goal is to augment language model pre-training with a latent knowledge retriever for modular, interpretable knowledge access, and to demonstrate its effectiveness on open-domain question answering. We pre-train a latent knowledge retriever in an unsupervised way by back-propagating through a retrieval step over millions of documents, guided by masked language modeling. REALM outperforms prior models on three open-domain QA benchmarks by 4-16% absolute accuracy and offers interpretability and modularity.
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.
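The core mechanism the abstract describes, retrieving documents and marginalizing the language-model prediction over them so that the masked-language-modeling loss flows back into the retriever, can be illustrated with a minimal sketch. This is not the paper's implementation (REALM uses BERT-based encoders and maximum inner product search over Wikipedia); the function names, dimensions, and dummy predictor below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def retrieve_and_predict(query_emb, doc_embs, predict_fn, k=3):
    """One retrieve-then-predict step (simplified sketch of REALM's idea).

    Documents are scored by inner product with the query embedding; the
    top-k are kept, and the output distribution marginalizes the
    knowledge-augmented predictor over them:
        p(y | x) = sum_z p(y | x, z) * p(z | x)
    Because p(z | x) is a softmax over dense retrieval scores, the MLM
    loss on p(y | x) backpropagates into the retriever's embeddings.
    """
    scores = doc_embs @ query_emb               # relevance scores f(x, z)
    topk = np.argsort(scores)[-k:]              # retrieval step: top-k docs
    p_z = softmax(scores[topk])                 # p(z | x) over retrieved docs
    # Marginalize the predictor's vocab distribution over documents.
    return sum(w * predict_fn(z) for w, z in zip(p_z, topk))

# Toy usage: 5 "documents" in a 4-d embedding space, a stand-in predictor.
rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 4))
query = rng.normal(size=4)
vocab_dist = lambda z: softmax(rng.normal(size=10))  # dummy p(y | x, z)
p = retrieve_and_predict(query, docs, vocab_dist)    # a distribution over 10 tokens
```

Since the retrieval weights and each per-document distribution are normalized, the marginalized output `p` is itself a valid probability distribution; in the actual model, differentiating through it is what trains the retriever without retrieval supervision.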