Solving the parallel programming problem: patterns, programmability and choice

How do we get programmers to routinely write parallel software? We have been working in earnest on this problem for over 25 years; actually much longer, if you consider that the first multi-threaded machine appeared in 1958 (the Gamma 60 from Bull). But at this point, I'm not sure we're really getting any closer to solving it.

If we study the history of parallel programming, it is clear that in order to solve the parallel programming problem we need to: (1) understand how people write parallel software (mine the key design patterns), (2) agree on how to discuss programmability, and (3) stop scaring away our software developers. In this talk, I will describe these issues and how our collaborations at UC Berkeley's ParLab are addressing them.

Speaker Details

Tim Mattson is an "old" parallel applications programmer. He has worked with far too many parallel computers, including the Cosmic Cube, Cray-2 vector computers, SMP machines from SGI and Sequent, NUMA computers such as the Altix, clusters (starting well before Beowulf) and, of course, Intel parallel supercomputers (from the iPSC/2 to ASCI Red to the 80-core TeraScale processor). Along the way, he has used dozens of parallel programming languages and was actively involved in the establishment of several (Strand, Linda, HPF, MPI, OpenMP, and most recently, OpenCL).

Dr. Mattson has been at Intel working on parallel computing since 1993. Currently he is Intel's parallel computing evangelist, which basically means he is doing whatever he can to make serial software rare. This includes work on message-passing many-core chips and design pattern languages for parallel programming (in collaboration with the ParLab at UC Berkeley).

Speakers:
Tim Mattson
Affiliation:
Intel Corporation - Application Research Lab