Publication | Open Access
Topology-aware GPU scheduling for learning workloads in cloud environments
Year: 2017
Citations: 57
References: 30
Venue: Unknown
Keywords: Cluster Computing, Engineering, Computer Architecture, GPU Computing, Data Science, Embedded Machine Learning, Parallel Computing, Topology-aware GPU Scheduling, Recent Advances, Computer Engineering, Multiple GPUs, Computer Science, Deep Learning, GPU Cluster, GPU Architecture, Hardware Acceleration, Edge Computing, Cloud Computing, Parallel Programming
Recent advances in hardware, such as multi-GPU systems and their availability in the cloud, are enabling deep learning in domains including health care, autonomous vehicles, and the Internet of Things. Multi-GPU systems exhibit complex connectivity among GPUs and between GPUs and CPUs. Workload schedulers must therefore account for both hardware topology and a workload's communication requirements when allocating CPU and GPU resources, in order to minimize execution time and improve utilization in shared cloud environments.
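The core idea of topology-aware placement can be sketched as follows. This is a minimal illustration, not the paper's scheduler: it assumes a hypothetical 4-GPU node where the link bandwidths (50 GB/s for an NVLink-connected pair, 16 GB/s for a PCIe-only path) are made-up values, and it picks the GPU set whose slowest interconnect link is fastest.

```python
from itertools import combinations

# Hypothetical 4-GPU node: LINK_BW[i][j] is the bandwidth (GB/s) between
# GPU i and GPU j. The values are illustrative only: 50 for an
# NVLink-connected pair, 16 for a PCIe-only path.
LINK_BW = [
    [0, 50, 16, 16],
    [50, 0, 16, 16],
    [16, 16, 0, 50],
    [16, 16, 50, 0],
]

def min_pairwise_bw(gpus):
    """Bottleneck bandwidth among all GPU pairs in a candidate placement."""
    return min(LINK_BW[a][b] for a, b in combinations(gpus, 2))

def best_placement(num_gpus, total=4):
    """Choose the GPU set whose slowest link is fastest (topology-aware)."""
    return max(combinations(range(total), num_gpus), key=min_pairwise_bw)

# A communication-heavy 2-GPU job lands on an NVLink-connected pair
# (bottleneck 50 GB/s) rather than a PCIe-only pair (16 GB/s).
print(best_placement(2))  # -> (0, 1)
```

A topology-oblivious scheduler might instead pick any two free GPUs, e.g. (0, 2), whose 16 GB/s bottleneck link would throttle gradient exchange in data-parallel training.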