Abstract
Control and memory divergence between threads in the same execution bundle, or warp, can significantly throttle the performance of GPU applications. We exploit the observation that many GPU applications exhibit error tolerance to propose branch and data herding. Branch herding eliminates control divergence by forcing all threads in a warp to take the same control path. Data herding eliminates memory divergence by forcing each thread in a warp to load from the same memory block. To support branch and data herding safely and efficiently, we propose (1) a static analysis and compiler framework that prevents exceptions when control and data errors are introduced, (2) a profiling framework that aims to maximize performance while maintaining acceptable output quality, and (3) hardware optimizations that improve the performance benefits of exploiting error tolerance through branch and data herding. Our software implementation of branch herding on an NVIDIA GeForce GTX 480 improves performance by up to 34% (13% on average) for a suite of NVIDIA CUDA SDK and Parboil [7] benchmarks. Our hardware implementation of branch herding improves performance by up to 55% (30% on average). Data herding improves performance by up to 32% (25% on average). Observed output quality degradation is minimal for several applications that exhibit error tolerance, especially visual computing applications. For a more detailed exposition of this work, see [6].
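To make the two transformations concrete, the sketch below shows one way a software herding pass could be expressed at the CUDA source level using warp vote and shuffle intrinsics: lanes vote on a branch predicate and all follow the majority path, and a scattered gather is redirected into the memory block targeted by lane 0. This is an illustration of the general technique, not the authors' compiler output; the kernel name, its parameters, the majority-vote policy, the 32-element block granularity, and the assumption that arrays are padded to a multiple of the warp size are all ours.

```cuda
// Illustrative sketch only, not the paper's generated code. Assumes CUDA 9+
// warp primitives (__ballot_sync, __popc, __shfl_sync); kernel name,
// parameters, and indexing are hypothetical, and input arrays are assumed to
// be padded to a multiple of the 32-thread warp size.
#include <cuda_runtime.h>

#define FULL_MASK 0xffffffffu

__global__ void herded_gather_kernel(const float* in, const int* idx,
                                     float* out, int n, float thresh)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    bool active = (i < n);   // no early return: every lane must join the votes

    // Branch herding: each lane votes on its own predicate, then the whole
    // warp follows the majority outcome, so no control divergence remains.
    int my_taken = active && (in[i] > thresh);
    unsigned ballot = __ballot_sync(FULL_MASK, my_taken);
    int taken = (__popc(ballot) >= 16);            // majority of 32 lanes

    // Data herding: redirect a scattered gather so every lane loads from the
    // memory block targeted by lane 0 (keeping its offset within the block),
    // collapsing a divergent gather into a single memory transaction.
    int j = active ? idx[i] : 0;                   // lane's intended element
    int herd_block = __shfl_sync(FULL_MASK, j / 32, 0);
    float v = active ? in[herd_block * 32 + (j % 32)] : 0.0f;

    if (active)
        out[i] = taken ? (v * 2.0f) : (v * 0.5f);  // toy use of both results
}
```

Note that minority lanes silently receive the wrong branch outcome or the wrong data, which is why the paper restricts herding to error-tolerant applications and gates it behind profiling for acceptable output quality.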
Original language | English (US) |
---|---|
Pages (from-to) | 427-428 |
Number of pages | 2 |
Journal | Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT |
State | Published - Oct 22 2012 |
Event | 21st International Conference on Parallel Architectures and Compilation Techniques, PACT 2012 - Minneapolis, MN, United States (Sep 19-23, 2012) |
Keywords
- Control divergence
- Error tolerance
- GPGPU
- High performance
- Memory divergence