Publication | Open Access
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Citations: 1.9K · References: 34 · Year: 2019
Keywords: LLM Fine-tuning · Engineering · Machine Learning · Multilingual Pretraining · Large Language Model · Corpus Linguistics · Sentiment Analysis · Language Understanding · Speech Recognition · Natural Language Processing · Autoregressive Language Modeling · Data Science · Masked Positions · Computational Linguistics · Language Studies · Machine Translation · Large AI Model · Deep Learning · Retrieval Augmented Generation · Linguistics
BERT’s denoising autoencoding pretraining captures bidirectional context better than autoregressive approaches, but it ignores dependencies among masked tokens and suffers from a pretrain‑finetune mismatch. XLNet is proposed as a generalized autoregressive pretraining method that learns bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order, thereby avoiding BERT’s limitations. It also integrates Transformer‑XL’s segment-level recurrence and relative positional encoding into pretraining. Under comparable settings, XLNet surpasses BERT on 20 benchmark tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.
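For reference, the permutation-based objective described in the abstract can be written compactly. Let $\mathcal{Z}_T$ denote the set of all permutations of the index sequence $[1, 2, \dots, T]$, and let $z_t$ and $\mathbf{z}_{<t}$ denote the $t$-th element and the first $t-1$ elements of a permutation $\mathbf{z} \in \mathcal{Z}_T$. XLNet maximizes the expected autoregressive log-likelihood over factorization orders:

$$\max_{\theta}\;\; \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}\left[\,\sum_{t=1}^{T} \log p_{\theta}\!\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right)\right]$$

Because the model parameters $\theta$ are shared across all factorization orders, each position is, in expectation, conditioned on every other position in the sequence, which is how bidirectional context is captured without corrupting the input with mask tokens.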