Embedding Leveled Hypercube Algorithms into Hypercubes
When a leveled hypercube algorithm (one dimension used at a time) is
mapped in the straightforward way into a hypercube in which all edges
are usable at once, most of the host machine's bandwidth is unused.
This paper shows how to construct embeddings which better utilize the
host's edges. In particular, I show how to map an n-dimensional
leveled hypercube algorithm into an n-dimensional host hypercube so
that the communication throughput of every guest edge is
Theta((n/log n)^(log_6 2)) = omega(n^0.386) times
the communication throughput of a host edge. Furthermore, the routing
can be done on edge-disjoint paths of length at most n. This result
can be applied to other algorithms that are run on hypercubes. For
example, if an algorithm runs on a mesh with a axes each of length
2^l, but uses only one axis at a time, then it can be embedded in an
la-dimensional hypercube so that each mesh edge has throughput
Theta(l(a/log a)^(log_6 2)).
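A minimal sketch (not from the paper) of what "leveled" means: the classic recursive-doubling all-reduce is a leveled hypercube algorithm, since each of its n phases communicates along exactly one hypercube dimension, leaving the edges of the other n-1 dimensions idle. The function name and simulation style below are illustrative assumptions.

```python
def leveled_allreduce(values):
    """Simulate an all-reduce (sum) on a 2^n-node hypercube.

    values[p] is the local value at node p. In phase d, every node
    exchanges with its neighbor across dimension d (p XOR 2^d), so
    only the dimension-d edges carry traffic during that phase --
    the leveled, one-dimension-at-a-time pattern.
    """
    p = len(values)
    assert p and p & (p - 1) == 0, "node count must be a power of two"
    n = p.bit_length() - 1
    vals = list(values)
    for d in range(n):                 # one dimension per phase
        nxt = list(vals)
        for node in range(p):
            partner = node ^ (1 << d)  # neighbor across dimension d
            nxt[node] = vals[node] + vals[partner]
        vals = nxt
    return vals                        # every node holds the global sum
```

Run naively, such an algorithm uses only 1/n of the host hypercube's edges in any phase; the embeddings described above recover much of that idle bandwidth.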
SPAA '92, 4th Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 264-270, 1992.
Scanned version in the ACM's Digital Library