TY - JOUR
T1 - Compiler and hardware support for reducing the synchronization of speculative threads
AU - Zhai, Antonia
AU - Steffan, J. Gregory
AU - Colohan, Christopher B.
AU - Mowry, Todd C.
PY - 2008/5/1
Y1 - 2008/5/1
N2 - Thread-level speculation (TLS) allows us to automatically parallelize general-purpose programs by supporting parallel execution of threads that might not actually be independent. In this article, we focus on one important limitation of program performance under TLS: stalls that result from synchronizing and forwarding scalar values between speculative threads, values that would otherwise cause frequent data dependences and, hence, failed speculation. Using SPECint benchmarks that have been automatically transformed by our compiler to exploit TLS, we present, evaluate in detail, and compare both compiler and hardware techniques for improving the communication of scalar values. We find that through our dataflow algorithms for three increasingly aggressive instruction scheduling techniques, the compiler can drastically reduce the critical forwarding path introduced by the synchronization and forwarding of scalar values. We also show that hardware techniques for reducing synchronization can be complementary to compiler scheduling, but that the additional performance benefits are minimal and are generally not worth the cost.
AB - Thread-level speculation (TLS) allows us to automatically parallelize general-purpose programs by supporting parallel execution of threads that might not actually be independent. In this article, we focus on one important limitation of program performance under TLS: stalls that result from synchronizing and forwarding scalar values between speculative threads, values that would otherwise cause frequent data dependences and, hence, failed speculation. Using SPECint benchmarks that have been automatically transformed by our compiler to exploit TLS, we present, evaluate in detail, and compare both compiler and hardware techniques for improving the communication of scalar values. We find that through our dataflow algorithms for three increasingly aggressive instruction scheduling techniques, the compiler can drastically reduce the critical forwarding path introduced by the synchronization and forwarding of scalar values. We also show that hardware techniques for reducing synchronization can be complementary to compiler scheduling, but that the additional performance benefits are minimal and are generally not worth the cost.
KW - Automatic parallelization
KW - Chip-multiprocessing
KW - Instruction scheduling
KW - Thread-level speculation
UR - http://www.scopus.com/inward/record.url?scp=48849089280&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=48849089280&partnerID=8YFLogxK
U2 - 10.1145/1369396.1369399
DO - 10.1145/1369396.1369399
M3 - Article
AN - SCOPUS:48849089280
SN - 1544-3566
VL - 5
SP - 1
EP - 33
JO - ACM Transactions on Architecture and Code Optimization
JF - ACM Transactions on Architecture and Code Optimization
IS - 1
ER -