Concepedia

Publication | Open Access

Topology-aware GPU scheduling for learning workloads in cloud environments

Citations: 57 · References: 30 · Year: 2017

Abstract

Recent advances in hardware, such as systems with multiple GPUs and their availability in the cloud, are enabling deep learning in various domains including health care, autonomous vehicles, and Internet of Things. Multi-GPU systems exhibit complex connectivity among GPUs and between GPUs and CPUs. Workload schedulers must consider hardware topology and workload communication requirements in order to allocate CPU and GPU resources for optimal execution time and improved utilization in shared cloud environments.
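To make the idea of topology-aware placement concrete, here is a minimal illustrative sketch (not the paper's actual scheduler): given a hypothetical matrix of pairwise GPU link bandwidths, a scheduler can pick the GPU subset whose slowest link is fastest, so that a communication-heavy job lands on well-connected GPUs (e.g. an NVLink pair rather than GPUs separated by PCIe). All names and the bandwidth figures below are assumptions for illustration.

```python
from itertools import combinations

# Hypothetical pairwise link bandwidths (GB/s) among 4 GPUs.
# GPUs 0-1 and 2-3 are assumed to share a fast NVLink connection;
# the remaining pairs communicate over slower PCIe.
TOPOLOGY = [
    [0, 50, 16, 16],
    [50, 0, 16, 16],
    [16, 16, 0, 50],
    [16, 16, 50, 0],
]

def best_gpu_set(topology, k):
    """Return the k-GPU subset whose slowest pairwise link is fastest."""
    def bottleneck(gpus):
        # The job's communication is limited by its weakest link.
        return min(topology[a][b] for a, b in combinations(gpus, 2))
    return max(combinations(range(len(topology)), k), key=bottleneck)

# A 2-GPU job should be placed on an NVLink-connected pair.
print(best_gpu_set(TOPOLOGY, 2))  # -> (0, 1)
```

A production scheduler would combine such a bottleneck score with CPU affinity, current utilization, and fragmentation concerns, but the core topology-matching step can be reduced to a subset-selection problem like this one.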
