Publication | Open Access
Learning to Compose Soft Prompts for Compositional Zero-Shot Learning
Citations: 41 | References: 0 | Year: 2022
Keywords: Artificial Intelligence, Few-shot Learning, Compose Soft Prompts, Engineering, Machine Learning, Soft Prompting Method, Natural Language Processing, Multimodal LLM, Zero-shot Learning, Visual Grounding, Pattern Recognition, Computational Linguistics, Robot Learning, Language Studies, Machine Translation, Compositional Soft Prompting, Vision Language Model, Computer Science, Compositionality, Deep Learning, Computer Vision, Linguistics
We introduce compositional soft prompting (CSP), a parameter-efficient learning technique to improve the zero-shot compositionality of large-scale pretrained vision-language models (VLMs) like CLIP. We develop CSP for compositional zero-shot learning, the task of predicting unseen attribute-object compositions (e.g., old cat and young tiger). VLMs have a flexible text encoder that can represent arbitrary classes as natural language prompts, but they often underperform task-specific architectures on the compositional zero-shot benchmark datasets. CSP treats the attributes and objects that define classes as learnable vocabulary tokens. During training, this vocabulary is tuned to recognize classes that compose tokens in multiple ways (e.g., old cat and white cat). At test time, we recompose the learned attribute-object vocabulary in new combinations to recognize novel classes. We show that CSP outperforms CLIP on benchmark datasets by an average of 10.9 percentage points on AUC. CSP also outperforms CoOp, a soft prompting method that fine-tunes the prefix context tokens, by an average of 5.8 percentage points on AUC. We perform additional experiments to show that CSP improves generalization to higher-order attribute-attribute-object compositions (e.g., old white cat) and to combinations of pretrained attributes and fine-tuned objects. The code is available at https://github.com/BatsResearch/csp.
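The core idea in the abstract — learn one embedding per attribute and per object, then recompose them into prompts for unseen pairs at test time — can be illustrated with a minimal toy sketch. This is not the authors' implementation (see the linked repository for that); it uses NumPy, random vectors in place of trained CLIP embeddings, and mean-pooling as a stand-in for the frozen text encoder, so all names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension (CLIP uses 512+)

# Hypothetical attribute/object vocabularies from the abstract's examples.
attributes = ["old", "young", "white"]
objects = ["cat", "tiger"]

# CSP's learnable parameters: one vector per attribute and per object.
# Here they are random; in CSP they are initialized from CLIP's token
# embeddings and tuned on seen compositions (e.g., old cat, white cat).
attr_emb = {a: rng.normal(size=d) for a in attributes}
obj_emb = {o: rng.normal(size=d) for o in objects}

# Frozen prefix context, e.g. the token embeddings of "a photo of".
prefix = rng.normal(size=(3, d))

def compose_prompt(attribute, obj):
    """Build the prompt token sequence [prefix; attribute; object]."""
    return np.vstack([prefix, attr_emb[attribute], obj_emb[obj]])

def text_feature(prompt_tokens):
    """Stand-in for a frozen text encoder: mean-pool, then L2-normalize."""
    v = prompt_tokens.mean(axis=0)
    return v / np.linalg.norm(v)

# At test time, recompose the tuned vocabulary into unseen pairs.
unseen = [("old", "cat"), ("young", "tiger")]
class_feats = np.stack([text_feature(compose_prompt(a, o)) for a, o in unseen])

# Classify an image feature by cosine similarity, CLIP-style.
image_feat = rng.normal(size=d)
image_feat /= np.linalg.norm(image_feat)
scores = class_feats @ image_feat
pred = unseen[int(np.argmax(scores))]
```

The point of the sketch is the recomposition step: only the attribute and object vectors are learned, so any attribute-object pair (including higher-order ones like "old white cat", by stacking two attribute vectors) can be assembled into a prompt without new parameters.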