Leveraging Fine-Grained Multithreading for Efficient SIMD Control Flow

Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal area overhead. Scalar threads are grouped together into SIMD batches (sometimes referred to as warps). While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture, and applications (in particular, "general purpose" non-graphics applications) using these instructions may experience reduced performance when branch outcomes diverge across the processing elements of a warp.
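To make the cost of divergence concrete, the following sketch (my own illustration, not code from the talk) models a warp that executes a branch under an active-lane mask: when some lanes take the branch and others do not, the two paths must be serialized, doubling the number of SIMD passes.

```python
# Hypothetical model of SIMD branch divergence (illustrative only).
# A warp executes one instruction for all lanes at once; when a branch
# diverges, the 'then' and 'else' paths run serially under a lane mask.

def simd_passes(values, threshold=4):
    """Count SIMD passes needed to execute `if v < threshold: A else: B`
    for one warp whose lanes hold `values`."""
    taken = [v < threshold for v in values]
    passes = 0
    if any(taken):          # one pass for the 'then' path, taken lanes active
        passes += 1
    if not all(taken):      # one pass for the 'else' path, remaining lanes active
        passes += 1
    return passes

print(simd_passes([0, 1, 2, 3]))  # uniform warp: 1 pass
print(simd_passes([0, 1, 9, 9]))  # divergent warp: 2 serialized passes
```

A uniform warp finishes the branch in a single pass, while a divergent warp pays for both paths even though each lane uses only one of them.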

In this talk I will present a hardware mechanism that leverages the parallelism and fine-grained multithreading hardware already present in GPUs to achieve more efficient SIMD branch execution. The mechanism dynamically regroups threads into new SIMD warps on the fly when branch outcomes diverge, improving performance by an average of 20.7% for an estimated area increase of 4.7%.
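The regrouping idea can be sketched as follows (an assumed simplification, not the hardware design presented in the talk): after divergence, threads drawn from several warps that share the same next program counter are packed together into new, fuller warps.

```python
# Minimal sketch of dynamic warp regrouping (illustrative assumption,
# not the talk's implementation): threads are bucketed by their next PC
# and repacked into warps of up to WARP_SIZE threads each.

from collections import defaultdict

WARP_SIZE = 4

def regroup(threads):
    """threads: list of (thread_id, next_pc) pairs after a divergent
    branch. Returns (pc, [thread_ids]) warps packed by shared PC."""
    by_pc = defaultdict(list)
    for tid, pc in threads:
        by_pc[pc].append(tid)
    warps = []
    for pc, tids in by_pc.items():
        for i in range(0, len(tids), WARP_SIZE):
            warps.append((pc, tids[i:i + WARP_SIZE]))
    return warps

# Two diverged warps, each half at PC 100 and half at PC 200,
# repack into two full warps instead of four half-empty ones.
diverged = [(0, 100), (1, 100), (2, 200), (3, 200),
            (4, 100), (5, 100), (6, 200), (7, 200)]
print(regroup(diverged))
```

The payoff is utilization: four half-empty warps become two full ones, so every SIMD pass does useful work on all lanes.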

Speaker Details

Tor M. Aamodt is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of British Columbia. He earned his B.A.Sc., M.A.Sc., and Ph.D. degrees at the University of Toronto. Prior to joining the faculty at UBC, he worked at NVIDIA on the memory system of the G80 and, while a Ph.D. student, at Intel Corporation in the Microarchitecture Research Lab.

Speakers:
Tor Aamodt
Affiliation:
University of British Columbia