Publication | Closed Access
An Integer-Only and Group-Vector Systolic Accelerator for Efficiently Mapping Vision Transformer on Edge
Citations: 52
References: 21
Year: 2023
Keywords: Engineering, Hardware Algorithm, Computer Architecture, Group-vector Systolic Accelerator, Natural Language Processing, Image Analysis, Data Science, High-performance Architecture, Parallel Computing, Edge Detection, Computational Geometry, Vision Sensor, Transformer-like Network, Machine Vision, Computer Engineering, Computer Science, Deep Learning, FPGA Design, Computer Vision, Hardware Acceleration, Image Processor, Parallel Programming
Transformer-like networks have shown remarkably high performance in both natural language processing and computer vision. However, the heavy computational demands of non-linear floating-point arithmetic and the irregular memory-access patterns of the self-attention mechanism still make it challenging to deploy Transformers on edge devices. To address these issues, we propose an integer-only quantization scheme that simplifies the non-linear operations (LayerNorm, Softmax, and GELU), while an algorithm-hardware co-design strategy guarantees both high accuracy and high efficiency. In addition, we construct a general-purpose group-vector systolic array to efficiently accelerate matrix-multiplication operations, covering both regular matrix multiplication/convolution and the irregular multi-head self-attention mechanism. A unified data-packaging strategy and a flexible on-/off-chip data-storage management strategy are also proposed to further improve performance. The design has been deployed on the Xilinx ZCU102 FPGA platform, achieving an overall inference latency of 4.077 ms and 11.15 ms per image for ViT-tiny and ViT-S, respectively. The average throughput reaches up to 762.7 GOPS, a significant improvement over the previous state-of-the-art FPGA Transformer accelerator.
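To illustrate the kind of integer-only simplification of non-linear operations the abstract describes, the sketch below implements an I-BERT-style integer softmax: the exponential is decomposed as exp(x) = 2^(-z)·exp(p) so that the 2^(-z) factor becomes a bit shift, and exp(p) is replaced by a second-order integer polynomial. The constants (0.3585, 1.353, 0.344) come from the published I-BERT approximation; this is an assumed, illustrative scheme, not necessarily the exact quantization used in the paper.

```python
import numpy as np

def int_softmax(q, scale):
    """Illustrative I-BERT-style integer-only softmax (not the paper's
    exact scheme). All arithmetic on the quantized logits `q` (int64,
    with q * scale ~= real logits) is integer-only; `scale` is used
    just to pre-quantize the polynomial constants, which a hardware
    implementation would do offline.
    """
    q = q - q.max()                          # integer max-subtraction for stability
    q_ln2 = int(0.6931471805599453 / scale)  # integer representation of ln(2)
    z = (-q) // q_ln2                        # decompose x = -z*ln2 + p, z >= 0
    p = q + z * q_ln2                        # quantized remainder p in (-ln2, 0]
    # exp(p) ~= 0.3585*(p + 1.353)^2 + 0.344, constants pre-quantized:
    q_b = int(1.353 / scale)
    q_c = int(0.344 / (0.3585 * scale * scale))
    q_exp = ((p + q_b) ** 2 + q_c) >> z      # exp(x) ~= exp(p) >> z, via bit shifts
    return q_exp / q_exp.sum()               # common scale cancels in the ratio
```

The final division is shown in floating point only for readability; on the accelerator it would be a requantization step, so the whole softmax stays in the integer domain.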