Concepedia

TLDR

MIMD parallel computers often require synchronizing processors after completing many independent subtasks. The study aims to minimize the expected makespan by optimally allocating subtasks to processors. Assuming independent, identically distributed subtasks with increasing failure rates and a fixed communication overhead per processor, the authors use renewal theory, reliability theory, order statistics, and large‑deviation theory to analyze allocation strategies. Allocating an equal number of subtasks to each processor all at once achieves good efficiency, a result that follows from a general theorem extending central‑limit‑type behavior even when the CLT cannot be proved.

Abstract

When using MIMD (multiple instruction, multiple data) parallel computers, one is often confronted with solving a task composed of many independent subtasks where it is necessary to synchronize the processors after all the subtasks have been completed. This paper studies how the subtasks should be allocated to the processors in order to minimize the expected time it takes to finish all the subtasks (sometimes called the makespan). We assume that the running times of the subtasks are independent, identically distributed, increasing failure rate random variables, and that assigning one or more subtasks to a processor entails some overhead, or communication time, that is independent of the number of subtasks allocated. Our analyses, which use ideas from renewal theory, reliability theory, order statistics, and the theory of large deviations, are valid for a wide class of distributions. We show that allocating an equal number of subtasks to each processor all at once has good efficiency. This appears as a consequence of a rather general theorem which shows how some consequences of the central limit theorem hold even when we cannot prove that the central limit theorem applies.
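The trade-off the paper analyzes can be sketched with a small Monte Carlo experiment. This is an illustration only, not the paper's model: it assumes Weibull(shape 2) subtask times (one example of an increasing-failure-rate distribution) and a fixed overhead paid once by each processor that receives work, then estimates the expected makespan of the one-shot equal allocation for several processor counts:

```python
import random

def expected_makespan(n_subtasks, n_procs, overhead, trials=2000, seed=0):
    """Monte Carlo estimate of the expected makespan when n_subtasks
    are split as evenly as possible among n_procs processors.

    Subtask times are drawn from a Weibull distribution with shape 2
    (an increasing-failure-rate distribution); each processor that
    receives work pays a fixed communication overhead once.
    """
    rng = random.Random(seed)
    # Distribute subtasks as evenly as possible across processors.
    base, extra = divmod(n_subtasks, n_procs)
    loads = [base + (1 if i < extra else 0) for i in range(n_procs)]
    total = 0.0
    for _ in range(trials):
        # A processor finishes at: overhead + sum of its subtask times.
        finish = [overhead + sum(rng.weibullvariate(1.0, 2.0) for _ in range(k))
                  for k in loads if k > 0]
        total += max(finish)  # makespan = slowest processor
    return total / trials

# The overhead puts a floor under the makespan, and the maximum over
# processors grows with the processor count, so speedup saturates.
for p in (1, 2, 4, 8):
    print(p, round(expected_makespan(64, p, overhead=2.0), 2))
```

Because each processor pays the overhead only once and the subtask sums concentrate around their means, equal one-shot allocation already performs well in this sketch, consistent with the paper's conclusion.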
