Publication | Open Access
Visual Instruction Tuning
Citations: 668 · Year: 2023
Engineering · Machine Learning · Visual Instruction Tuning · Multimodal Learning · Large Language Models · Natural Language Processing · Multimodal LLM · Computational Linguistics · Machine Translation · Large AI Model · Cognitive Science · Language-only GPT-4 · Vision Language Model · Perceptual User Interface · Computer Science · Multimodal Translation · Deep Learning · Computer Vision · Visual Function · Visual Reasoning · Multimodal GPT-4 · Eye Tracking · Large Language
Instruction tuning LLMs with machine-generated data boosts zero-shot performance, yet this approach remains underexplored for multimodal models. This study introduces the first use of language-only GPT-4 to generate multimodal instruction-following data for vision-language models. The authors train LLaVA, an end-to-end multimodal model that connects a vision encoder to an LLM and is tuned on the GPT-4-derived instruction data, enabling general-purpose visual and language understanding. LLaVA achieves an 85.1% relative score versus GPT-4 on a synthetic multimodal instruction-following dataset, reaches 92.53% accuracy on Science QA when fine-tuned and combined with GPT-4, and the data, model, and code are released publicly.
Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make the GPT-4-generated visual instruction tuning data, our model, and our code base publicly available.
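To make the described architecture concrete, below is a minimal, illustrative sketch of how a vision encoder can be connected to an LLM through a learned projection, in the spirit of the design outlined in the abstract. The class name, feature dimensions, frozen vision encoder, and the Hugging Face-style `get_input_embeddings`/`inputs_embeds` interface are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch: project visual features into the LLM's token-embedding space
# and prepend them to the instruction tokens (assumed interfaces, not LLaVA's code).
import torch
import torch.nn as nn

class VisionLanguageAssistant(nn.Module):
    def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.vision_encoder = vision_encoder          # e.g. a CLIP-style ViT returning patch features
        self.projector = nn.Linear(vision_dim, llm_dim)  # learned mapping into the LLM embedding space
        self.llm = llm                                # assumed: decoder-only LLM with an embeddings API

    def forward(self, pixel_values, input_ids):
        # Encode the image into patch features: (batch, n_patches, vision_dim).
        with torch.no_grad():                         # vision encoder kept frozen in this sketch
            visual_feats = self.vision_encoder(pixel_values)
        # Project visual features to LLM embedding size: (batch, n_patches, llm_dim).
        visual_tokens = self.projector(visual_feats)
        # Embed the text instruction tokens: (batch, seq_len, llm_dim).
        text_tokens = self.llm.get_input_embeddings()(input_ids)
        # Prepend the visual tokens to the text sequence and run the language model.
        inputs_embeds = torch.cat([visual_tokens, text_tokens], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```

In this sketch only the projection (and, during instruction tuning, the LLM) would receive gradients, which mirrors the two-stage recipe of aligning visual features first and then tuning on instruction-following data.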