Exploiting Parallelism with Dependence-Aware Scheduling

Xiaotong Zhuang, Alexandre Eichenberger, Yangchun Luo, Kevin O’Brien and Kathryn O’Brien

It is well known that a large fraction of applications cannot be parallelized at compile time because of unpredictable data dependences arising from indirect memory accesses and/or memory accesses guarded by data-dependent conditional statements. A significant body of prior work attempts to parallelize such applications using runtime data-dependence analysis and scheduling. Performance depends heavily on the ratio of the dependence-analysis overhead to the amount of parallelism actually available in the code. When evaluating applications on a modern multicore processor, we found that the overheads are often high and the available parallelism often low.

We propose a novel software-based approach, called dependence-aware scheduling, to parallelize code with unknown data dependences. Unlike prior work, our main goal is to reduce the negative impact of dependence computation, so that when there is no opportunity for speedup, the code still runs without significant slowdown; when there is an opportunity, dependence-aware scheduling can yield impressive speedups.

Our results indicate that dependence-aware scheduling greatly improves performance, with speedups of up to 4x on a number of computation-intensive applications. The results also show negligible slowdown in a stress test in which parallelism is continuously detected but never exploited.
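To make the idea of runtime data-dependence analysis and scheduling concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation): for a loop such as `a[idx[i]] += b[i]`, whose dependences are unknown until the index array is available at runtime, iterations are grouped into "waves" of mutually independent iterations; each wave could be executed in parallel, while waves themselves run in order. The function name and the read/write-set representation are assumptions made for this example.

```python
# Illustrative sketch only: runtime dependence analysis that partitions
# loop iterations into parallel-safe "waves".

def schedule_waves(reads, writes):
    """reads[i]/writes[i]: sets of locations iteration i reads/writes.
    Returns a list of waves, each a list of iteration indices; iterations
    in the same wave have no dependences among them."""
    last_write = {}   # location -> wave of the latest write to it
    last_read = {}    # location -> wave of the latest read of it
    wave_of = []
    for i in range(len(reads)):
        w = 0
        for loc in reads[i]:               # flow dependence (read-after-write)
            if loc in last_write:
                w = max(w, last_write[loc] + 1)
        for loc in writes[i]:              # anti- and output dependences
            if loc in last_write:
                w = max(w, last_write[loc] + 1)
            if loc in last_read:
                w = max(w, last_read[loc] + 1)
        wave_of.append(w)
        for loc in reads[i]:
            last_read[loc] = max(last_read.get(loc, -1), w)
        for loc in writes[i]:
            last_write[loc] = w
    waves = {}
    for i, w in enumerate(wave_of):
        waves.setdefault(w, []).append(i)
    return [waves[w] for w in sorted(waves)]

# Example: idx = [0, 1, 0, 2]; iterations 0 and 2 both touch a[0],
# so iteration 2 must wait for iteration 0 to finish.
idx = [0, 1, 0, 2]
rs = [{("a", j)} for j in idx]
ws = [{("a", j)} for j in idx]
print(schedule_waves(rs, ws))   # → [[0, 1, 3], [2]]
```

In this toy version the analysis is a sequential pass over the iteration space; keeping that pass cheap relative to the work in each iteration is exactly the overhead-versus-parallelism trade-off the abstract highlights.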
