Publication | Open Access
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Citations: 680 | References: 21 | Year: 2022 | Venue: Unknown
Abstract: Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage at training time. However, in the context of NLU, prior work reveals that prompt tuning does not perform well for normal-sized pretrained models. We also find that existing methods of prompt tuning cannot handle hard sequence labeling tasks, indicating a lack of universality. We present a novel empirical finding that properly optimized prompt tuning can be universally effective across a wide range of model scales and NLU tasks. It matches the performance of fine-tuning while having only 0.1%–3% tuned parameters. Our method, P-Tuning v2, is an implementation of Deep Prompt Tuning (Li and Liang, 2021; Qin and Eisner, 2021) optimized and adapted for NLU. Given the universality and simplicity of P-Tuning v2, we believe it can serve as an alternative to fine-tuning and a strong baseline for future research.
[Chart: citations by year]