Opening Remarks: 9.15-9.30
Session 1: 9.30 - 10.30
- Paolo Costa, Microsoft Research
Title: Towards Rack-scale Computing: Challenges and Opportunities
New hardware technologies such as systems-on-chip and networks-on-chip (SoCs and NoCs), switchless network fabrics, silicon photonics, and RDMA are redefining the landscape of data center computing, enabling thousands of cores to be interconnected at high speed at the scale of today's racks. We refer to this new class of hardware as rack-scale computers because the rack is increasingly replacing the individual server as the basic building block of modern data centers. Most of the benefits promised by these new architectures, however, can only be achieved with adequate support from the software stack. In this talk, I will describe some of the research challenges and opportunities introduced by rack-scale computing. As a concrete example, I will provide an overview of some of the research projects related to this topic that we are pursuing in our group.
- Tyler Szepesi, Bernard Wong, Ben Cassell, Tim Brecht, University of Waterloo
Designing A Low-Latency Cuckoo Hash Table for Write-Intensive Workloads (slides)
Break with Refreshments: 10.30 - 11
Session 2: 11 - 12.30
- Martin Maas, Krste Asanovic, UC Berkeley, Tim Harris, Oracle Labs, John Kubiatowicz, UC Berkeley.
The Case for the Holistic Language Runtime System (slides)
- Jacob Nelson, Brandon Holt, Brandon Myers, Preston Briggs, Simon Kahan, Luis Ceze, Mark Oskin, University of Washington.
Grappa: A Latency-Tolerant Runtime for Large-Scale Irregular Applications (slides)
- Tim Harris, Maurice Herlihy, Yossi Lev, Oracle Labs, Yujie Liu, Lehigh University, Victor Luchangco, Virendra Marathe, Mark Moir, Oracle Labs.
Towards Whatever-Scale Abstractions for Data-Driven Parallelism
Session 3: 14 - 15.30
Gustavo Alonso, ETH Zurich
Title: Rackscale - the things that matter (slides)
Rack-scale computing has become the standard for many applications running in a data center. For a variety of reasons, today it is possible to develop fully customized solutions that achieve impressive performance numbers. In this talk I will argue that customization is important but needs to be sustained by general-purpose techniques and components. The research agenda in the coming years should focus on the latter, rather than on producing an infinite variety of high-performance systems tailored for narrow use cases. Otherwise, the inevitable problems with total cost of ownership during the life cycle of real systems (maintenance, further development, software evolution, additional functionality) will soon catch up with many existing proposals.
Tim Harris, Oracle Labs
Title: What We Talk About When We Talk About Scheduling (slides)
Distributed workloads involve scheduling and resource allocation decisions at multiple levels of the stack: deciding which machines a job will run on; deciding which instances of replicated services they will use; arbitrating within those services between multiple clients; deciding which VMs' virtual CPUs get which physical cores, and which threads run on those CPUs; deciding how the instructions in those threads are scheduled in multi-threaded processors, and how they compete for resources in the cores and in the interconnect. In this talk I will illustrate the kinds of interference that can occur at these different levels, and discuss some possible approaches for handling these problems.
Sanjeev Kumar, Facebook
Title: Efficiency at Scale (slides)
Joint poster session: 15.30 - 16.30
Panel session: 16.30-17.30
Panelists: Gustavo Alonso, ETH Zurich; Steve Hand, Microsoft Research; Tim Harris, Oracle Labs; Sanjeev Kumar, Facebook; Ant Rowstron, Microsoft Research
Call for Papers (PDF)
In the near future we will see "rack-scale computers" with 1000s of cores, terabytes of memory, high-bandwidth and low-latency internal fabrics. These architectures are being driven by the need to increase density and connectivity between servers while lowering cost and power consumption. Enabling technologies such as systems-on-chip (SoCs), glueless fabrics, silicon photonics, and RDMA are already available today as are early prototypes of rack-scale computing architectures from companies such as AMD SeaMicro, HP, and Intel.
These new architectures raise several interesting research questions. Should they be considered as large shared-memory NUMA servers, as traditional distributed systems, or a combination of the two? What are the correct communication primitives to let applications benefit from low-latency remote access? How should the fabric be organized, and how should CPUs, DRAM, and storage be placed in it? What are the likely failure modes and how do we achieve fault tolerance? How should we integrate rack-scale computers into data center networks? How can researchers effectively prototype and test novel ideas in this space?
Answering these questions requires a multidisciplinary effort. The goal of this workshop is to bring together researchers and practitioners from different areas (hardware architectures, networking, operating systems, storage, distributed systems, and HPC) and discuss novel ideas on how to design next-generation rack-scale systems.
We invite submissions on hardware, networking, systems designs, and applications for rack-scale computing. We especially welcome cross-layer approaches such as hardware-software co-design, and encourage unfinished but potentially ground-breaking research ideas.
Submissions can be on any aspect of rack-scale computing, including but not limited to:
- Systems-on-chip (SoCs) and Networks-on-chip (NoCs)
- Rack-scale fabrics: topologies, routing, congestion control
- OS and application design for rack-scale computing
- FPGA-based prototyping and design
- Memory and storage disaggregation
- Coherency, consistency, and fault tolerance
- QoS and virtualization
- Low-energy and/or high-density design
WRSC 2014 is co-located with EuroSys 2014
Sunday, April 13, 2014
- Questions: email@example.com
Paolo Costa, Microsoft Research (Program Chair)
Dushyanth Narayanan, Microsoft Research (General Chair)
- Gustavo Alonso, ETH Zurich
- Edouard Bugnion, EPFL
- Luis Ceze, U. Washington
- Paolo Costa, Microsoft Research
- Leendert van Doorn, AMD
- Babak Falsafi, EPFL
- Blake Fitch, IBM Research
- Nathan Farrington, Facebook
- Tim Harris, Oracle Labs
- Michael Kaminsky, Intel Labs
- Dushyanth Narayanan, Microsoft Research
- Parthasarathy (Partha) Ranganathan, Google
- Luigi Rizzo, U. Pisa
- Thomas Wenisch, U. Michigan
- Bernard Wong, U. Waterloo