Data Parallel Haskell


Exploiting vector instructions with generalized stream fusion

Exploiting vector instructions with generalized stream fusion, Geoff Mainland, Simon Peyton Jones, Simon Marlow, and Roman Leshchinskiy, ICFP 2013.

Abstract

Stream fusion is a powerful technique for automatically transforming high-level sequence-processing functions into efficient implementations. It has been used to great effect in Haskell libraries for manipulating byte arrays, Unicode text, and unboxed vectors. However, some operations, like vector append, still do not perform well within the standard stream fusion framework. Others, like SIMD computation using the SSE and AVX instructions available on modern x86 chips, do not seem to fit in the framework at all.

In this paper we introduce generalized stream fusion, which solves these issues. The key insight is to bundle together multiple stream representations, each tuned for a particular class of stream consumer. We also describe a stream representation suited for efficient computation with SSE instructions. Our ideas are implemented in modified versions of the GHC compiler and vector library. Benchmarks show that high-level Haskell code written using our compiler and libraries can produce code that is competitive with hand-tuned assembly.
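The key idea is easy to sketch in Haskell. The following is a minimal, illustrative rendering (the Bundle constructor and field names are invented for exposition; the real vector library bundle carries further representations, including one tuned for SIMD consumption):

    {-# LANGUAGE ExistentialQuantification #-}

    -- Classic stream fusion: a stream is a step function over a hidden seed.
    data Step s a = Yield a s | Skip s | Done

    data Stream a = forall s. Stream (s -> Step s a) s

    -- Generalized stream fusion: bundle several representations of the same
    -- sequence, and let each consumer pick the one it fuses best with.
    data Bundle a = Bundle
      { elems  :: Stream a    -- element-at-a-time view, for ordinary folds
      , chunks :: Stream [a]  -- chunk-at-a-time view, good for append and bulk copies
      }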


Vectorisation avoidance

Vectorisation avoidance, Gabriele Keller, Manuel Chakravarty, Roman Leshchinskiy, Ben Lippmeier, and Simon Peyton Jones, Haskell Symposium, Copenhagen, 2012.

Abstract

Flattening nested parallelism is a vectorising code transform that converts irregular nested parallelism into flat data parallelism. Although the result has good asymptotic performance, flattening thoroughly restructures the code. Many intermediate data structures and traversals are introduced, which may or may not be eliminated by subsequent optimisation. We present a novel program analysis to identify parts of the program where flattening would only introduce overhead, without appropriate gain. We present empirical evidence that avoiding vectorisation in these cases leads to more efficient programs than if we had applied vectorisation and then relied on array fusion to eliminate intermediates from the resulting code.
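As a small illustration, in DPH's parallel-array notation (requiring GHC's ParallelArrays extension; the function here is invented for exposition), consider a parallel map whose body is purely scalar:

    -- The body (\x -> x * x + 1) contains no nested parallelism, so fully
    -- flattening it would only turn three scalar operations into three
    -- whole-array traversals with intermediate arrays. The analysis keeps
    -- such maximal sequential subexpressions as ordinary scalar code and
    -- lifts only the outer mapP.
    norms :: [:Double:] -> [:Double:]
    norms xs = mapP (\x -> x * x + 1) xs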


Guiding parallel array fusion with index types

Guiding parallel array fusion with index types, Ben Lippmeier, Manuel Chakravarty, Gabriele Keller, and Simon Peyton Jones, Haskell Symposium, Copenhagen, 2012.

Abstract

We present a refined approach to parallel array fusion that uses indexed types to specify the internal representation of each array. Our approach aids the client programmer in reasoning about the performance of their program in terms of the source code. It also makes the intermediate code easier to transform at compile-time, resulting in faster compilation and more reliable runtimes. We demonstrate how our new approach improves both the clarity and performance of several end-user written programs, including a fluid flow solver and an interpolator for volumetric data.
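In the released Repa 3 library this appears as a representation tag in the array type. A small sketch, assuming the Repa 3 names (U for manifest unboxed arrays, D for delayed arrays, computeP for parallel evaluation):

    import Data.Array.Repa as R

    -- The index type makes the representation visible in the source:
    -- R.map yields a delayed (D) array, and computeP materialises it
    -- in parallel as a manifest unboxed (U) array.
    doubleAll :: Array U DIM2 Double -> IO (Array U DIM2 Double)
    doubleAll arr = computeP (R.map (* 2) arr)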


Work Efficient Higher-Order Vectorisation

Work Efficient Higher-Order Vectorisation, Ben Lippmeier, Manuel Chakravarty, Gabriele Keller, Roman Leshchinskiy, and Simon Peyton Jones, ICFP 2012, Copenhagen.

Abstract

Existing approaches to higher-order vectorisation, also known as flattening nested data parallelism, do not preserve the asymptotic work complexity of the source program. Straightforward examples, such as sparse matrix-vector multiplication, can suffer a severe blow-up in both time and space, which limits the practicality of this method. We discuss why this problem arises, identify the mis-handling of index space transforms as the root cause, and present a solution using a refined representation of nested arrays. We have implemented this solution in Data Parallel Haskell (DPH) and present benchmarks showing that realistic programs, which used to suffer the blow-up, now have the correct asymptotic work complexity. In some cases, the asymptotic complexity of the vectorised program is even better than the original.
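In DPH notation, the sparse matrix-vector multiplication mentioned above reads roughly as follows (a sketch; each row stores (column index, value) pairs). Naive flattening of the inner indexing v !: i replicates the whole vector v once per row, which is exactly the index-space mis-handling the paper identifies:

    smvm :: [:[: (Int, Double) :]:] -> [:Double:] -> [:Double:]
    smvm m v = [: sumP [: x * (v !: i) | (i, x) <- row :] | row <- m :]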


Efficient Parallel Stencil Convolution in Haskell

Efficient Parallel Stencil Convolution in Haskell, Ben Lippmeier, Gabriele Keller, and Simon Peyton Jones, Submitted to ICFP 2011.

Abstract

Stencil convolution is a fundamental building block of many scientific and image processing algorithms. We present a declarative approach to writing such convolutions in Haskell that is both efficient at runtime and implicitly parallel. To achieve this we extend our prior work on the Repa array library with two new features: partitioned and cursored arrays. Combined with careful management of the interaction between GHC and its back-end code generator LLVM, we achieve performance comparable to the standard OpenCV library.
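A sketch of what such a convolution looks like against the Repa 3 stencil interface (module and function names as in the released repa library; treat the details as illustrative):

    {-# LANGUAGE QuasiQuotes #-}
    import Data.Array.Repa              as R
    import Data.Array.Repa.Stencil      as R
    import Data.Array.Repa.Stencil.Dim2 as R

    -- Sum of the 3x3 neighbourhood of every pixel. mapStencil2 yields a
    -- partitioned array: border and inner regions compile to separate
    -- loops, and cursored indexing lets GHC share index computations
    -- between overlapping stencil positions.
    sum3x3 :: Array U DIM2 Double -> IO (Array U DIM2 Double)
    sum3x3 arr
      = computeP
      $ mapStencil2 (BoundConst 0)
          [stencil2| 1 1 1
                     1 1 1
                     1 1 1 |] arr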

Regular, shape-polymorphic, parallel arrays in Haskell

Regular, shape-polymorphic, parallel arrays in Haskell, Gabriele Keller, Manuel Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, and Ben Lippmeier, ICFP 2010.

Abstract

We present a novel approach to regular, multi-dimensional arrays in Haskell. The main highlights of our approach are that it (1) is purely functional, (2) supports reuse through shape polymorphism, (3) avoids unnecessary intermediate structures rather than relying on subsequent loop fusion, and (4) supports transparent parallelisation. We show how to embed two forms of shape polymorphism into Haskell's type system using type classes and type families. In particular, we discuss the generalisation of regular array transformations to arrays of higher rank, and introduce a type-safe specification of array slices. We discuss the runtime performance of our approach for three standard array algorithms. We achieve absolute performance comparable to handwritten C code. At the same time, our implementation scales well up to 8 processor cores.
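For instance, the type-safe slice specification fixes the rank of the result at compile time. In the current Repa spelling (representation tags were added in Repa 3, which postdates this paper):

    {-# LANGUAGE FlexibleContexts #-}
    import Data.Array.Repa as R

    -- Taking one row of a rank-2 array yields a rank-1 (delayed) array;
    -- the slice specification Any :. r :. All encodes this in the types.
    row :: Source r Double => Array r DIM2 Double -> Int -> Array D DIM1 Double
    row arr r = slice arr (Any :. r :. All)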


Harnessing the multicores

Harnessing the Multicores: Nested Data Parallelism in Haskell, Simon Peyton Jones, Roman Leshchinskiy, Gabriele Keller, and Manuel M. T. Chakravarty, Foundations of Software Technology and Theoretical Computer Science (FSTTCS'08), Bangalore, December 2008.

Abstract

If you want to program a parallel computer, a purely functional language like Haskell is a promising starting point. Since the language is pure, it is by-default safe for parallel evaluation, whereas imperative languages are by-default unsafe. But that doesn’t make it easy! Indeed it has proved quite difficult to get robust, scalable performance increases through parallel functional programming, especially as the number of processors increases.

A particularly promising and well-studied approach to employing large numbers of processors is data parallelism. Blelloch’s pioneering work on NESL showed that it was possible to combine a rather flexible programming model (nested data parallelism) with a fast, scalable execution model (flat data parallelism). In this paper we describe Data Parallel Haskell, which embodies nested data parallelism in a modern, general-purpose language, implemented in a state-of-the-art compiler, GHC. We focus particularly on the vectorisation transformation, which transforms nested to flat data parallelism.
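In miniature (an illustration, not GHC's actual intermediate form), vectorisation gives every function a lifted variant that works on whole arrays, so that nested parallel maps become flat traversals:

    f :: Double -> Double
    f x = x * x + 1

    -- Lifted variant generated by vectorisation: one flat data-parallel
    -- traversal, so that   mapP f xs   can be replaced by   f_L xs.
    f_L :: [:Double:] -> [:Double:]
    f_L xs = [: x * x + 1 | x <- xs :]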


Partial vectorisation

Partial vectorisation of Haskell programs, Manuel M. T. Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, and Gabriele Keller, Proc ACM Workshop on Declarative Aspects of Multicore Programming, San Francisco, Jan 2008.

Abstract

Vectorisation for functional programs, also called the flattening transformation, relies on drastically reordering computations and restructuring the representation of data types. As a result, it only applies to the purely functional core of a fully-fledged functional language, such as Haskell or ML. A concrete implementation needs to apply vectorisation selectively and integrate vectorised with unvectorised code. This is challenging, as vectorisation alters the data representation, which must be suitably converted between vectorised and unvectorised code. In this paper, we present an approach to partial vectorisation that selectively vectorises sub-expressions and data types, and also, enables linking vectorised with unvectorised modules.
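Concretely, in the DPH libraries the boundary is crossed with conversion functions such as fromPArrayP: a vectorised module exports a wrapper over PArray, the array representation that unvectorised code can see. A sketch following the shape of DPH's documented dot-product example:

    {-# LANGUAGE ParallelArrays #-}
    {-# OPTIONS_GHC -fvectorise #-}
    module DotP (dotp_wrapper) where

    import Data.Array.Parallel
    import Data.Array.Parallel.Prelude.Double

    -- Vectorised core: runs over DPH's flat internal representation.
    dotp :: [:Double:] -> [:Double:] -> Double
    dotp xs ys = sumP [: x * y | (x, y) <- zipP xs ys :]

    -- Exported wrapper: converts from PArray, the representation
    -- visible to ordinary (unvectorised) modules.
    dotp_wrapper :: PArray Double -> PArray Double -> Double
    dotp_wrapper v w = dotp (fromPArrayP v) (fromPArrayP w)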


DPH status report 2007

Data Parallel Haskell: a status report, Manuel M. T. Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, Gabriele Keller, and Simon Marlow, Proc ACM Workshop on Declarative Aspects of Multicore Programming, Nice, Jan 2007.

Abstract

We describe the design and current status of our effort to implement the programming model of nested data parallelism in the Glasgow Haskell Compiler. We extended the programming model and its implementation, both of which were first popularised by the NESL language, in terms of expressiveness as well as efficiency of its implementation. Our current aim is to provide a convenient programming environment for SMP parallelism, and especially multicore architectures. Preliminary benchmarks show that we are, at least for some programs, able to achieve good absolute performance and excellent speedups.

Simon Peyton Jones, simonpj@microsoft.com