Publication | Closed Access
What does a platypus look like? Generating customized prompts for zero-shot image classification
Citations: 153 | References: 44 | Year: 2023 | Venue: Unknown
Engineering · Machine Learning · Open-vocabulary Models · Natural Language Processing · Large Language Models · Multimodal LLM · Image Classification · Image Analysis · Zero-shot Learning · Visual Grounding · Pattern Recognition · Computational Linguistics · Zero-shot Image Classification · Visual Question Answering · Platypus Look · Language Models · Machine Translation · Customized Prompts · Vision Language Model · Computer Science · Computer Vision
Open-vocabulary models are a promising new paradigm for image classification. Unlike traditional classification models, open-vocabulary models classify among any arbitrary set of categories specified with natural language during inference. These natural-language inputs, called "prompts", typically consist of a set of hand-written templates (e.g., "a photo of a {}") which are completed with each of the category names. This work introduces a simple method to generate higher-accuracy prompts, without relying on any explicit knowledge of the task domain and with far fewer hand-constructed sentences. To achieve this, we combine open-vocabulary models with large language models (LLMs) to create Customized Prompts via Language models (CuPL, pronounced "couple"). In particular, we leverage the knowledge contained in LLMs to generate many descriptive sentences that contain important discriminating characteristics of the image categories. This allows the model to place greater importance on these regions in the image when making predictions. We find that this straightforward and general approach improves accuracy on a range of zero-shot image classification benchmarks, including a gain of over one percentage point on ImageNet. Finally, this simple baseline requires no additional training and remains completely zero-shot. Code available at https://github.com/sarahpratt/CuPL.
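The classification scheme the abstract describes can be sketched as follows: each category gets several LLM-generated descriptive prompts, their text embeddings are averaged into a single class embedding, and the image is assigned to the class whose averaged embedding is most similar to the image embedding. This is a minimal illustration only; the function names and the toy hash-based encoder below are stand-ins, not the paper's actual CLIP calls.

```python
import hashlib
import numpy as np

def classify_with_prompt_ensemble(image_emb, class_prompts, embed_text):
    """Score each class by cosine similarity between the image embedding
    and the normalized mean of its prompt embeddings; return the best class."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    scores = {}
    for cls, prompts in class_prompts.items():
        # Embed and L2-normalize every generated prompt for this class.
        embs = np.stack([embed_text(p) for p in prompts])
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        # Average the prompt embeddings into one class embedding.
        mean_emb = embs.mean(axis=0)
        mean_emb = mean_emb / np.linalg.norm(mean_emb)
        scores[cls] = float(image_emb @ mean_emb)
    return max(scores, key=scores.get)

def toy_embed(text, dim=64):
    """Deterministic stand-in for a real text/image encoder (illustration only)."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(dim)
```

In the actual method, `embed_text` would be an open-vocabulary model's text encoder (e.g., CLIP's), and the prompts would come from an LLM asked to describe each category's discriminating characteristics.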