Publication | Open Access
Improving and Simplifying Pattern Exploiting Training
2021 · 33 citations · 14 references
Recently, pre-trained language models (LMs) have achieved strong performance when fine-tuned on difficult benchmarks like SuperGLUE. However, performance can suffer when very few labeled examples are available for fine-tuning. Pattern Exploiting Training (PET) is a recent approach that leverages patterns for few-shot learning, but it relies on task-specific unlabeled data. In this paper, we focus on few-shot learning without any unlabeled data and introduce ADAPET, which modifies PET's objective to provide denser supervision during fine-tuning. As a result, ADAPET outperforms PET on SuperGLUE without any task-specific unlabeled data.
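The "denser supervision" can be illustrated with a toy sketch. In PET, the loss normalizes only over the verbalizer (label) tokens at the masked position, so non-label vocabulary logits receive no gradient; ADAPET's decoupled label objective instead normalizes over the full vocabulary and applies binary cross-entropy, pushing up the correct label token and pushing down the incorrect ones. The function names, toy vocabulary, and logit values below are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pet_loss(vocab_logits, label_ids, correct_id):
    # PET-style loss: softmax restricted to the label tokens only,
    # so logits of non-label tokens never affect the loss.
    sub_logits = [vocab_logits[i] for i in label_ids]
    sub_probs = softmax(sub_logits)
    return -math.log(sub_probs[label_ids.index(correct_id)])

def adapet_decoupled_label_loss(vocab_logits, label_ids, correct_id):
    # ADAPET-style decoupled label loss (sketch): normalize over the
    # FULL vocabulary, then binary cross-entropy — maximize the
    # correct label token's probability and minimize the incorrect
    # ones'. Every vocabulary logit now contributes to the loss.
    probs = softmax(vocab_logits)
    loss = -math.log(probs[correct_id])
    for i in label_ids:
        if i != correct_id:
            loss += -math.log(1.0 - probs[i])
    return loss

# Toy setup: a 6-token vocabulary where tokens 1 and 4 are the
# verbalizer tokens and token 1 is correct.
label_ids, correct_id = [1, 4], 1
logits_a = [0.0, 2.0, 0.5, -1.0, 1.0, 0.3]
logits_b = [3.0, 2.0, 0.5, -1.0, 1.0, 0.3]  # only a non-label logit changed

# PET is blind to the non-label logit change; ADAPET is not.
print(pet_loss(logits_a, label_ids, correct_id) ==
      pet_loss(logits_b, label_ids, correct_id))        # True
print(adapet_decoupled_label_loss(logits_a, label_ids, correct_id) ==
      adapet_decoupled_label_loss(logits_b, label_ids, correct_id))  # False
```

The contrast in the last two checks is the point: under the restricted softmax, perturbing a non-label logit leaves the PET loss unchanged, while the full-vocabulary objective reacts to it, which is the denser training signal the abstract refers to.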