Publication | Closed Access
Scaling LAPACK panel operations using parallel cache assignment
Citations: 37
References: 28
Year: 2010
Venue: Unknown
Keywords: Cluster Computing, Massively-parallel Computing, Engineering, LAPACK Routines, High-performance Architecture, Parallel Processing, Parallel Performance Evaluation, Computer Architecture, Computer Engineering, Massive Parallelism, Parallel Programming, Computer Science, Parallel Computing, Supercomputer Architecture, LAPACK Panel Operations, LU Panel Factorizations, Parallel Tool
In LAPACK, many matrix operations are cast as block algorithms that iteratively process a panel using an unblocked algorithm and then update a remainder matrix using the high-performance Level 3 BLAS. The Level 3 BLAS have excellent weak scaling, but panel processing tends to be bus bound, and thus scales with bus speed rather than the number of processors (p). Amdahl's law therefore ensures that as p grows, the panel computation will become the dominant cost of these LAPACK routines. Our contribution is a novel parallel cache assignment approach which we show scales well with p. We apply this general approach to the QR and LU panel factorizations on two commodity 8-core platforms with very different cache structures, and demonstrate superlinear panel factorization speedups on both machines. Other approaches to this problem demand complicated reformulations of the computational approach, new kernels to be tuned, new mathematics, and an inflation of the high-order flop count, and they do not perform as well. By demonstrating a straightforward alternative that avoids all of these contortions and scales with p, we address a critical stumbling block for dense linear algebra in the age of massive parallelism.
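To illustrate the block-algorithm structure the abstract describes, the following is a minimal sketch (not the paper's implementation) of a right-looking blocked LU factorization without pivoting in NumPy. The block size `nb` and helper name `blocked_lu` are assumptions for illustration; the point is that the unblocked panel loop is memory-bound while the trailing update maps onto Level 3 BLAS-style operations (TRSM and GEMM).

```python
import numpy as np

def blocked_lu(A, nb=64):
    """Right-looking blocked LU (no pivoting), a sketch of the structure
    described in the abstract: an unblocked panel factorization followed
    by a Level 3 BLAS-style trailing-matrix update."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        end = min(k + nb, n)
        # Unblocked panel factorization: column-by-column, memory-bound.
        # This is the part that fails to scale with the core count p.
        for j in range(k, end):
            A[j+1:, j] /= A[j, j]
            A[j+1:, j+1:end] -= np.outer(A[j+1:, j], A[j, j+1:end])
        if end < n:
            # TRSM-like step: solve L11 * U12 = A12 for U12.
            L11 = np.tril(A[k:end, k:end], -1) + np.eye(end - k)
            A[k:end, end:] = np.linalg.solve(L11, A[k:end, end:])
            # GEMM-like step: A22 -= L21 @ U12. Compute-bound, scales well.
            A[end:, end:] -= A[end:, k:end] @ A[k:end, end:]
    return A
```

As p grows, the GEMM update parallelizes well while the serial panel loop above dominates, which is the Amdahl's-law bottleneck the paper's parallel cache assignment targets.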