Concepedia

TLDR

Transformer models dominate NLP, but their use in computer vision remains limited: attention is typically integrated with convolutional networks or used to replace parts of them. The study demonstrates that a pure transformer applied directly to sequences of image patches can achieve strong image classification performance without relying on CNNs. Vision Transformer (ViT), pre-trained on large datasets and fine-tuned on benchmarks such as ImageNet, CIFAR-100, and VTAB, matches or outperforms state-of-the-art convolutional networks while using far fewer training resources.

Abstract

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
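
The core mechanism is operationally simple: split the image into fixed-size patches, flatten each patch, project it linearly into an embedding space, prepend a learnable [class] token, and add position embeddings; the resulting token sequence is fed to a standard Transformer encoder. The sketch below illustrates this input pipeline in plain NumPy under assumed ViT-Base-like sizes (224x224 input, 16x16 patches, 768-dimensional embeddings); the projection matrix, [class] token, and position embeddings are random stand-ins for learned parameters, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of ViT's input pipeline (hypothetical, randomly initialized
# parameters): cut the image into fixed-size patches, flatten and linearly
# project each one, prepend a learnable [class] token, and add position
# embeddings. The output sequence is what the Transformer encoder consumes.

rng = np.random.default_rng(0)

H = W = 224               # image height/width (assumed, as in fine-tuning)
C = 3                     # RGB channels
P = 16                    # patch size ("16x16" patches)
D = 768                   # embedding dimension (ViT-Base uses 768)
N = (H // P) * (W // P)   # number of patches: 14 * 14 = 196

image = rng.standard_normal((H, W, C))

# Split into non-overlapping P x P patches and flatten each to a vector.
patches = image.reshape(H // P, P, W // P, P, C)                   # (14, 16, 14, 16, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(N, P * P * C)   # (196, 768)

# Learned parameters in the real model; random stand-ins here.
W_embed = rng.standard_normal((P * P * C, D)) * 0.02   # linear patch projection
cls_token = rng.standard_normal((1, D)) * 0.02         # learnable [class] token
pos_embed = rng.standard_normal((N + 1, D)) * 0.02     # 1D position embeddings

tokens = patches @ W_embed                              # (196, D) patch embeddings
tokens = np.concatenate([cls_token, tokens], axis=0)    # prepend [class] token
tokens = tokens + pos_embed                             # add position information

print(tokens.shape)  # (197, 768): sequence fed to the Transformer encoder
```

From here the encoder is an unmodified Transformer; classification uses only the final representation of the [class] token, passed through a small MLP head.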

