Concepedia

Publication | Closed Access

Accelerating CUDA graph algorithms at maximum warp

Citations: 336

References: 18

Year: 2011

TLDR

Graphs are widely used data structures, and while GPUs can accelerate graph problems, irregular real‑world graphs cause severe performance degradation. To close this gap, the authors propose a virtual warp‑centric programming method that exposes GPU architectural traits to users and aligns the programming model with the underlying hardware. The method delivers up to 9× speedup over prior GPU algorithms and 12× over single‑thread CPU execution on irregular graphs, up to 30% improvement over prior GPU algorithms on regular graphs, and 1.3×–15.1× speedup on a set of GPU benchmark applications; the study also confirms that the GPU–CPU performance gap is mainly due to differences in memory bandwidth.

Abstract

Graphs are powerful data representations favored in many computational domains. Modern GPUs have recently shown promising results in accelerating computationally challenging graph problems, but their performance suffers heavily when the graph structure is highly irregular, as most real-world graphs tend to be. In this study, we first observe that the poor performance is caused by work imbalance and is an artifact of a discrepancy between the GPU programming model and the underlying GPU architecture. We then propose a novel virtual warp-centric programming method that exposes the traits of underlying GPU architectures to users. Our method significantly improves the performance of applications with heavily imbalanced workloads, and enables trade-offs between workload imbalance and ALU underutilization for fine-tuning the performance. Our evaluation reveals that our method exhibits up to 9x speedup over previous GPU algorithms and 12x over single-thread CPU execution on irregular graphs. When properly configured, it also yields up to 30% improvement over previous GPU algorithms on regular graphs. In addition to performance gains on graph algorithms, our programming method achieves 1.3x to 15.1x speedup on a set of GPU benchmark applications. Our study also confirms that the performance gap between GPUs and other multi-threaded CPU graph implementations is primarily due to the large difference in memory bandwidth.
