ASPLOS XIII, March 1-5, Seattle, WA
Thirteenth International Conference on
Architectural Support for Programming Languages and Operating Systems
(ASPLOS '08)


Workshops and Tutorials



 

             Sound Room      Cove Room             Marina Room               Pacific Room

Saturday AM                  Vista tutorial        Pin tutorial
Saturday PM                  Singularity tutorial  Multicore-cache tutorial
Sunday AM    MSPC workshop   M5 tutorial           RAMP tutorial             LIT tutorial
Sunday PM    MSPC workshop   NVidia GPU tutorial   RAMP tutorial             LIT tutorial

 Morning (AM) and all-day events start at 8:30am; afternoon (PM) events start at 1:30pm. Morning sessions run 8:30am-12:30pm and afternoon sessions run 1:30pm-5:30pm.

Vista: Architecture of the Windows Vista Kernel (tutorial)

Singularity: Using the Singularity Research Development Kit (tutorial)

Pin: Hands-On Pin for Architecture, Operating Systems, and Program Analysis Research (tutorial)

Multicore-cache: Does multicore change the way we should design caches? (tutorial)

M5: Using the M5 Simulator (tutorial)

NVidia: NVidia GPU programming (tutorial)

RAMP: Research Accelerator for Multiprocessors (tutorial)

LIT: Learning and Inference Tutorial (LIT) for Large Design and Parameter Spaces (tutorial)

MSPC: Memory System Performance and Correctness 2008 (workshop)

 

Tutorial/Workshop Descriptions

  • Vista: Architecture of the Windows Vista Kernel (tutorial)
    Organizer: Dave Probert (Microsoft)
    Abstract: Windows is commercially very successful, but until recently little information was available about its core architecture. This tutorial presents a detailed overview of the organization of Windows and of the architecture and design of the kernel, including its major modules. The recent Vista and Server 2008 releases include a major rewrite of the TCP/IP networking stack, which runs on top of the kernel but uses offloading, provides integrated IPv4/IPv6 support, and contains improved architectural support for measurement and instrumentation.

  • Singularity: Using the Singularity Research Development Kit (tutorial)
    Organizers: Galen Hunt and Jim Larus (Microsoft Research)
    Abstract:
    The Singularity operating system has been under development and used as a research vehicle by Microsoft Research over the last 4 years.  Singularity systems incorporate three key architectural features: software-isolated processes for protection of programs and system services, contract-based channels for communication, and manifest-based programs for verification of system properties.  

    This tutorial marks the broad release of the Singularity Research Development Kit, a complete package of the sources and tools required to build Singularity and to use it as a research platform.  We’ll describe the system in detail and walk through key features of the code and the build system.  Tutorial attendees will walk away with the knowledge necessary to use Singularity as a platform for their own OS and architecture research.

  • Pin: Hands-On Pin for Architecture, Operating Systems, and Program Analysis Research (tutorial)
    Organizer: Kim Hazelwood (Virginia)
    Abstract:
    The tutorial targets researchers, students, and educators alike, and provides a detailed look at Pin as a mechanism for rapid prototyping. The tutorial consists of three learning components. The first component provides a brief look at the inner workings of Pin and introduces its fundamental instrumentation structures and concepts. The second component introduces useful Pin-based tools that are freely available for download, and presents advanced mechanisms for reducing runtime overheads. The last component integrates the first two via a hands-on session that gives attendees immediate experience writing both fundamental and advanced Pin tools. (A minimal instruction-counting tool is sketched below, after these descriptions.)

  • Multicore-cache: Does multicore change the way we should design caches? (tutorial)
    Organizer: Hillery Hunter (IBM)
    Abstract:
    As technology moves forward, innovative advancements will rely on researchers at the architecture, compiler, and software levels becoming familiar with the bottlenecks of silicon scaling.  Two trends are of particular note:  (1) In many microprocessor families, increasing amounts of silicon area are being devoted to caches, and (2) Technology variability is causing stability concerns for our fundamental cache building block – six-transistor SRAM.  For each of these concerns, there are aggravating factors present in many proposed multicore designs: (1) Where die size is kept relatively equal to that of prior generations, inclusion of multiple cores, even if lighter-weight, often leaves less silicon area available for caches; and (2) Many proposals suggest complex multi-voltage management schemes, to deal with power consumption, thus aggravating and complicating circuit-level challenges in the face of increasing transistor-level variability.  In light of these factors, this tutorial will examine the fundamentals of cache design, and alternatives to six-transistor SRAM.

  • M5: Using the M5 Simulator (tutorial)
    Organizer: Ali Saidi (Michigan)
    Abstract:
    The goal of the tutorial is to introduce participants to the M5 full-system simulator and all of its new features. The M5 simulator has been used in over 50 publications. It now has even more ISA support, the ability to boot both Linux and Solaris, several CPU models, a new cache system, and more. M5 is freely distributable under a BSD-style license, and does not depend on any commercial or restricted-license software. Broadly, the tutorial will cover the capabilities, use, and extension of M5. In particular, we will cover how to configure and run simulations, including the required inputs and the outputs generated. Furthermore, an introduction to the "out-of-the-box" capabilities of the simulator will be presented, including the various CPU models, I/O devices, and memory models. Finally, we will discuss how to extend M5 through the use of its event queues, object base classes, and the built-in support for debugging both the M5 binary itself and the guest code running within M5. (A generic event-queue sketch illustrating this style of simulation appears below, after these descriptions.) The tutorial is intended for researchers in academia or industry looking for a free, open-source, full-system simulation environment for processor, system, or platform architecture studies. No specific prerequisite knowledge is required other than some familiarity with system or platform architecture research.

  • NVidia: NVidia GPU programming (tutorial)
    Organizer: David Luebke (Nvidia)
    Abstract:
    Modern GPUs provide a level of massively parallel computation that was once the preserve of supercomputers like the MasPar and the Connection Machine. NVIDIA's Tesla architecture for GPU Computing provides a fully programmable, massively multithreaded chip with up to 128 scalar processor cores, capable of delivering hundreds of billions of operations per second. Researchers across many scientific and engineering disciplines are using this platform to accelerate important computations by up to two orders of magnitude.

    In this tutorial, we will provide an overview of the Tesla architecture and explore the transition it represents in massively parallel computing: from the domain of supercomputers to that of commodity "manycore" hardware available to all. We will also introduce CUDA, a scalable parallel programming model and software environment. By providing a small set of readily understood extensions to the C/C++ languages, CUDA allows programmers to focus on writing efficient parallel algorithms without the burden of learning a multitude of new programming constructs. Finally, as the GPU is the only commodity "manycore" chip widely available today, we will explore its importance as a research platform for investigating important issues in parallel programming and architecture.

  • RAMP: Research Accelerator for Multiprocessors (tutorial)
    Organizer: Mark Oskin (Washington)
    Abstract:
    The Research Accelerator for Multiprocessors (RAMP) project offers an attractive alternative to simulation. Its mission is to create an infrastructure on which to conduct parallel computing research, in part by creating FPGA-based hardware available at low cost and in part by distributing software and “gateware” for free as open source. RAMP is a joint effort involving both academia (Berkeley, CMU, MIT, Stanford, Texas, and Washington) and industry (IBM, Microsoft, Sun Microsystems, and Xilinx). See ramp.eecs.berkeley.edu.

    The goal of this tutorial is to jump-start the use of RAMP in the parallel computing research community, which includes computer architecture, compilers, operating systems, and so on. By providing a clear foundational motivation for why RAMP exists, a clear demonstration of RAMP's capabilities (including a demonstration of a very large-scale multiprocessor), a hands-on tutorial of RAMP, and take-home hardware that participants can start using in their hotel rooms, this tutorial could mark a fork in the road, signifying the switch from software simulation to hardware emulation of new architectural ideas.


  • LIT: Learning and Inference Tutorial (LIT) for Large Design and Parameter Spaces (tutorial)
    Organizer: Martin Schulz (LLNL)
    Abstract:
    Increasing system and algorithmic complexity combined with a growing number of tunable architectural parameters pose significant challenges for both simulator-driven design evaluation and application performance modeling. In this hands-on tutorial we present a series of robust techniques to address these challenges. We first show how to apply statistical techniques such as clustering, association, and correlation analysis to understand application or architectural performance across large parameter spaces using sparse sampling. We then provide detailed instructions on how to construct two classes of effective predictive models based on piecewise polynomial regression and artificial neural networks. (A toy regression sketch appears below, after these descriptions.)

  • MSPC: Memory System Performance and Correctness 2008 (workshop)
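
Illustrative Code Sketches

Pin: the first component of the Pin tutorial introduces Pin's fundamental instrumentation structures. As a rough illustration, the following is a minimal instruction-counting Pin tool in the style of the simple examples shipped with the Pin kit. The tool name (inscount.so) and the target program (/bin/ls) are placeholders, and build details depend on the Pin kit's own makefiles.

    // Minimal Pin tool: counts every dynamically executed instruction of the
    // target program. Modeled on the simple instruction-count example in the
    // Pin kit; run as, e.g.:  pin -t inscount.so -- /bin/ls
    #include <iostream>
    #include "pin.H"

    static UINT64 icount = 0;

    // Analysis routine: executed before every dynamic instruction.
    VOID docount() { icount++; }

    // Instrumentation routine: called when Pin first encounters an
    // instruction; inserts a call to the analysis routine before it.
    VOID Instruction(INS ins, VOID *v) {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)docount, IARG_END);
    }

    // Called when the instrumented application exits.
    VOID Fini(INT32 code, VOID *v) {
        std::cerr << "Instruction count: " << icount << std::endl;
    }

    int main(int argc, char *argv[]) {
        if (PIN_Init(argc, argv)) return 1;   // parse Pin's command line
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_AddFiniFunction(Fini, 0);
        PIN_StartProgram();                   // never returns
        return 0;
    }

The same pattern, an instrumentation routine that decides where to insert calls plus analysis routines that do the counting or tracing, scales up to the advanced tools covered in the tutorial's second component.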

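M5: the event-queue style of simulation that the M5 abstract refers to can be illustrated with a generic discrete-event loop. The classes and functions below are invented for illustration only and are not M5's actual C++ interfaces; M5's own event and object base classes are covered in the tutorial itself.

    // Generic discrete-event simulation loop (illustration only; not M5 code).
    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    struct Event {
        unsigned long long tick;          // simulated time at which it fires
        std::function<void()> action;     // work to perform at that time
        bool operator>(const Event &o) const { return tick > o.tick; }
    };

    class EventQueue {
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> q;
        unsigned long long now = 0;
    public:
        void schedule(unsigned long long when, std::function<void()> a) {
            q.push(Event{when, std::move(a)});
        }
        unsigned long long curTick() const { return now; }
        void run() {                      // drain events in time order
            while (!q.empty()) {
                Event e = q.top();
                q.pop();
                now = e.tick;
                e.action();
            }
        }
    };

    int main() {
        EventQueue eq;
        // Toy "CPU" that retires one instruction per 500-tick cycle, three times.
        std::function<void(int)> cycle = [&](int remaining) {
            std::printf("tick %llu: retire instruction\n", eq.curTick());
            if (remaining > 1)
                eq.schedule(eq.curTick() + 500,
                            [&, remaining] { cycle(remaining - 1); });
        };
        eq.schedule(0, [&] { cycle(3); });
        eq.run();
        return 0;
    }

M5 is organized around the same idea at much larger scale: simulation objects schedule work on event queues, and extending the simulator largely means adding new objects and events.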

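LIT: the predictive-modeling part of the LIT tutorial can be previewed with a toy example: sparsely sample a one-dimensional design parameter, fit a low-degree polynomial by least squares, and predict the metric at configurations that were never simulated. The "simulator" below (simulate_miss_rate) is a made-up analytic function standing in for real simulation runs, and the fit is a plain least-squares quadratic rather than the piecewise models and neural networks covered in the tutorial.

    // Toy design-space study: sparsely sample log2(cache size), fit a
    // quadratic by least squares, and predict the metric elsewhere.
    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Synthetic stand-in for a real simulation: miss rate vs. log2(cache size).
    static double simulate_miss_rate(double log2_size) {
        return 0.30 * std::exp(-0.25 * (log2_size - 10.0)) + 0.02;
    }

    // Solve the 3x3 augmented system A*c = b (stored as A[i][0..3]) by
    // Gaussian elimination with partial pivoting.
    static void solve3(double A[3][4], double c[3]) {
        for (int col = 0; col < 3; ++col) {
            int piv = col;
            for (int r = col + 1; r < 3; ++r)
                if (std::fabs(A[r][col]) > std::fabs(A[piv][col])) piv = r;
            for (int k = 0; k < 4; ++k) std::swap(A[col][k], A[piv][k]);
            for (int r = col + 1; r < 3; ++r) {
                double f = A[r][col] / A[col][col];
                for (int k = col; k < 4; ++k) A[r][k] -= f * A[col][k];
            }
        }
        for (int r = 2; r >= 0; --r) {
            double s = A[r][3];
            for (int k = r + 1; k < 3; ++k) s -= A[r][k] * c[k];
            c[r] = s / A[r][r];
        }
    }

    int main() {
        // Sparse samples: only five of the fifteen possible cache sizes.
        const std::vector<double> xs = {10, 13, 16, 20, 24};
        std::vector<double> ys;
        for (double x : xs) ys.push_back(simulate_miss_rate(x));

        // Build the normal equations for y ~ c0 + c1*x + c2*x^2.
        double A[3][4] = {{0}};
        for (int i = 0; i < (int)xs.size(); ++i) {
            const double p[3] = {1.0, xs[i], xs[i] * xs[i]};
            for (int r = 0; r < 3; ++r) {
                for (int col = 0; col < 3; ++col) A[r][col] += p[r] * p[col];
                A[r][3] += p[r] * ys[i];
            }
        }
        double c[3];
        solve3(A, c);

        // Predict the metric at configurations that were never "simulated".
        for (double x = 10; x <= 24; x += 1.0)
            std::printf("log2(size)=%4.1f  predicted miss rate=%.4f\n",
                        x, c[0] + c[1] * x + c[2] * x * x);
        return 0;
    }

In a real study the samples would come from simulation runs, the parameter space would have many dimensions, and the tutorial's piecewise and neural-network models would replace the single quadratic.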


ASPLOS-XIII is sponsored by:
ACM SIGARCH, SIGPLAN, and SIGOPS

Comments? Suggestions? larus@microsoft.com