MIMD on GPU
The MIMD (Multiple Instruction, Multiple Data) execution model is more flexible than SIMD (Single Instruction, Multiple Data), but SIMD hardware is more scalable. GPU (Graphics Processing Unit) hardware uses a SIMD model with various additional constraints that make it even cheaper and more efficient, but harder to program. Is there a way to get the power and ease of use of MIMD programming models while targeting GPU hardware?
This talk discusses a compiler, assembler, and interpreter system that allows a GPU to implement a richly-featured MIMD execution model supporting message-passing, shared-memory communication, recursion, and more. Through a variety of careful design choices and optimizations, performance per unit of circuit complexity when executing MIMD code on both NVIDIA and AMD/ATI GPUs can be much higher than on native MIMD hardware. The discussion covers both the methods used and their motivation in terms of the relevant aspects of GPU architecture.
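To make the core idea concrete, here is a minimal, hypothetical sketch (in CUDA, for illustration only; it is not the MOG system itself) of how MIMD execution can be interpreted on SIMD-style GPU hardware: each hardware thread acts as one virtual MIMD processor with its own program counter and registers, and an interpreter loop dispatches on whatever opcode that thread is currently executing, relying on the hardware to serialize divergent cases. The instruction encoding, opcode names, and register-file size below are assumptions made for the example.

```cuda
// mimd_interp.cu -- hedged sketch of MIMD interpretation on a GPU (not the MOG system).
#include <cstdio>
#include <cuda_runtime.h>

enum Op { OP_LOADI, OP_ADD, OP_JNZ, OP_HALT };   // toy instruction set (assumed)

struct Insn { int op, a, b, c; };                // opcode plus up to three operand fields

// Each GPU thread behaves as one virtual MIMD processor: it keeps its own
// program counter and registers, and the interpreter loop dispatches on the
// opcode *that thread* is currently executing.  When threads in a warp sit at
// different opcodes, the SIMD hardware serializes the divergent switch cases,
// so the machine as a whole still makes MIMD-style progress.
__global__ void mimd_interp(const Insn *prog, int *result, int max_steps)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int pc  = (tid & 1) ? 4 : 0;     // odd/even threads start in different programs
    int reg[2] = {0, 0};             // tiny per-thread register file

    for (int step = 0; step < max_steps; ++step) {
        Insn i = prog[pc];
        switch (i.op) {
        case OP_LOADI: reg[i.a] = i.b;                  pc++; break;
        case OP_ADD:   reg[i.a] = reg[i.b] + reg[i.c];  pc++; break;
        case OP_JNZ:   pc = reg[i.a] ? i.b : pc + 1;          break;
        case OP_HALT:  result[tid] = reg[0];            return;
        }
    }
    result[tid] = reg[0];            // step budget exhausted
}

int main()
{
    // Two independent programs packed into one "instruction memory":
    //   program A (pc 0): r0 = 10; r1 = 32; r0 = r0 + r1; halt      -> 42
    //   program B (pc 4): r0 = 7;  r1 = r0 + r0; r0 = r1 + r0; halt -> 21
    Insn prog[] = {
        {OP_LOADI, 0, 10, 0}, {OP_LOADI, 1, 32, 0}, {OP_ADD, 0, 0, 1}, {OP_HALT, 0, 0, 0},
        {OP_LOADI, 0,  7, 0}, {OP_ADD, 1, 0, 0},    {OP_ADD, 0, 1, 0}, {OP_HALT, 0, 0, 0},
    };

    Insn *d_prog; int *d_res; int h_res[8];
    cudaMalloc(&d_prog, sizeof(prog));
    cudaMalloc(&d_res, 8 * sizeof(int));
    cudaMemcpy(d_prog, prog, sizeof(prog), cudaMemcpyHostToDevice);

    mimd_interp<<<1, 8>>>(d_prog, d_res, 100);
    cudaMemcpy(h_res, d_res, sizeof(h_res), cudaMemcpyDeviceToHost);

    for (int t = 0; t < 8; ++t)
        printf("thread %d -> %d\n", t, h_res[t]);   // alternates 42 / 21

    cudaFree(d_prog);
    cudaFree(d_res);
    return 0;
}
```

In this sketch, even-numbered threads compute 42 and odd-numbered threads compute 21, demonstrating two different instruction streams making progress concurrently on SIMD hardware; the real system described in the talk adds many further design choices and optimizations beyond this toy dispatch loop.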
Speaker Details
Upon completing his Ph.D. at Polytechnic University (now NYU-Poly) in 1986, Henry G. (Hank) Dietz joined the Computer Engineering faculty at Purdue University’s School of Electrical and Computer Engineering. In 1999, he moved to the University of Kentucky, where he is a Professor of Electrical and Computer Engineering and the James F. Hardymon Chair in Networking. Although he has approximately 200 scholarly publications, mostly in the fields of compilers and parallel processing, he is perhaps best known for his research group’s more practical research products, including: barrier synchronization hardware, the PCCTS/Antlr compiler tools, the world’s first Linux PC cluster supercomputer and the Parallel Processing HOWTO, SWAR (SIMD Within a Register), FNNs (computer-evolved Flat Neighborhood Networks), MOG (MIMD On GPU), and single-shot Anaglyph capture. Much of his work is freely distributed via the Aggregate.Org research consortium, which he leads. Dietz is also an active teacher and was one of the founders of the EPICS (Engineering Projects In Community Service) program. He is a member of both the ACM and the IEEE.
- Series: Microsoft Research Talks
- Date:
- Speakers: Henry Gordon Dietz, Jeff Running
- Affiliation: University of Kentucky