Publication | Closed Access
Compiler optimizations for eliminating barrier synchronization
Citations: 127
References: 25
Year: 1995
Venue: Unknown
Cluster Computing, Engineering, Compiler Technology, Barrier Synchronization, Computer Architecture, Stanford SUIF Compiler, Software Analysis, Parallel Software, Compilers, Parallel Computing, Data Communication, Parallelizing Compiler, Compiler Support, Computer Engineering, Computer Science, Novel Compiler Optimizations, Optimizing Compiler, Program Analysis, Formal Methods, Parallel Programming
This paper presents novel compiler optimizations for reducing synchronization overhead in compiler-parallelized scientific codes. A hybrid programming model is employed to combine the flexibility of the fork-join model with the precision and power of the single-program, multiple-data (SPMD) model. By exploiting compile-time computation partitions, communication analysis can eliminate barrier synchronization or replace it with less expensive forms of synchronization. We show that computation partitions and data communication can be represented as systems of symbolic linear inequalities for high flexibility and precision. These optimizations have been implemented in the Stanford SUIF compiler, and we extensively evaluate their performance using standard benchmark suites. Experimental results show that barrier synchronization is reduced by 29% on average, and by several orders of magnitude for certain programs.