
The goal of the Flashlight project at MSR Silicon Valley is to explore existing and new flash architectures and to build tools to aid in that endeavor.

CORFU (Clusters of Raw/Redundant Flash Units): Corfu organizes a cluster of flash devices into a single, coherent drive accessed by clients over the network. Each flash device in the cluster is a custom unit of low-power, low-cost hardware that allows raw flash to be attached directly to the network. Corfu uses client-side logic (a new variant of Paxos) to implement the abstraction of a single, cluster-scale drive.

The primary interface exposed by the Corfu drive to applications is a shared, globally-ordered log, which allows multiple clients to concurrently access the drive at high speeds without sacrificing consistency. Corfu supports other interfaces as well, including a linear address space; accordingly, multiple Corfu instances can be used to support a pool of non-shared, mountable volumes. Corfu's distributed nature allows flash bandwidth, capacity and write cycles to be incrementally scaled and shared across multiple clients. A single Corfu drive provides write throughput of up to 1M writes/sec, while read throughput scales linearly with the number of flash devices.
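The shared-log interface can be sketched as follows. This is a minimal in-memory illustration with hypothetical names, not the Corfu client API: the key property is that `append` returns a globally unique, totally ordered position, so concurrent clients agree on a single order of updates.

```python
# Minimal sketch (hypothetical names) of a shared, globally-ordered log:
# append() assigns the next position in a single total order; read()
# fetches the entry at a given position.
class SharedLog:
    def __init__(self):
        self._entries = []  # position -> payload

    def append(self, payload: bytes) -> int:
        """Append at the tail; the returned position defines the global order."""
        self._entries.append(payload)
        return len(self._entries) - 1

    def read(self, position: int) -> bytes:
        return self._entries[position]

log = SharedLog()
p0 = log.append(b"tx-begin")
p1 = log.append(b"tx-commit")
assert p0 < p1
assert log.read(p0) == b"tx-begin"
```

In the real system the log is striped across networked flash units and positions are assigned by a sequencing mechanism rather than a local list, but the client-visible contract is the same.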

SSD Performance: We extend the popular trace-driven disk simulator DiskSim from CMU with an SSD (solid-state disk) simulation module, and exercise the simulator under a variety of real workload traces. This simulation allows us to explore how real I/O systems (for example, those that support transaction processing) will perform when using SSDs rather than disks. We examine the performance of several potential organizations of flash chips and test the efficacy of various cleaning and wear-leveling algorithms under real workloads. Our initial focus is on server-side workloads.
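One of the simplest cleaning policies such a simulator can evaluate is greedy victim selection. The sketch below is a toy illustration of the idea (not DiskSim code): the erase victim is the block with the fewest still-valid pages, since those pages must be copied elsewhere before the block is erased.

```python
# Toy sketch of a greedy cleaning policy: choose as the erase victim the
# block with the fewest valid pages, minimizing the page copies the
# cleaner must perform before erasing.
def pick_cleaning_victim(valid_pages_per_block):
    """valid_pages_per_block: dict mapping block id -> count of valid pages."""
    return min(valid_pages_per_block, key=valid_pages_per_block.get)

blocks = {0: 60, 1: 12, 2: 45}
victim = pick_cleaning_victim(blocks)
assert victim == 1  # block 1 costs only 12 page copies to reclaim
```

Real cleaners also weigh block age and erase counts for wear leveling; the simulator lets such variants be compared under identical traces.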

A download of our simulator is available here. You can also browse a slide deck (.pptx) presented at WinHEC in Los Angeles and Taipei, and a set of posters from MSR TechFest in 2008.

Flash Research Platform: We are building a flexible platform for solid-state storage research by integrating FPGAs, DRAM, and flash devices. The design leverages reconfigurable hardware to provide maximum flexibility for innovative architectural and algorithmic design of next-generation storage systems.

TxFlash: Traditional storage devices export block-based APIs. Supporting atomic multiple-page writes is desirable, but often comes with non-negligible performance penalties. We observe that such penalties might be significantly reduced for flash memory used in SSDs due to its specific properties such as non-overwrite page writes and fast random reads. In TxFlash, we develop a set of protocols for SSDs to support multiple-page writes with ACID properties, explore their performance characteristics, and assess the implications of such an API on higher-level applications such as file systems.
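The core enabler is that flash never overwrites a page in place: new versions land on fresh pages, so old versions survive until a transaction commits. The sketch below is a simplified illustration of that idea with hypothetical structure, not the TxFlash protocol itself: staged page versions become visible atomically when a single commit record is applied.

```python
# Simplified sketch (hypothetical, not the TxFlash protocol) of atomic
# multi-page writes on non-overwrite flash: new page versions are staged
# on fresh pages, and one commit step makes them all visible at once.
class TxDrive:
    def __init__(self):
        self.mapping = {}     # logical page -> committed payload
        self._staged = {}     # tx id -> {logical page: new payload}
        self._next_tx = 0

    def begin(self) -> int:
        tx = self._next_tx
        self._next_tx += 1
        self._staged[tx] = {}
        return tx

    def write(self, tx: int, page: int, payload: bytes):
        self._staged[tx][page] = payload  # lands on a fresh flash page

    def commit(self, tx: int):
        # A single commit record flips every staged page to "live" atomically.
        self.mapping.update(self._staged.pop(tx))

    def abort(self, tx: int):
        self._staged.pop(tx)  # old page versions were never disturbed

d = TxDrive()
t = d.begin()
d.write(t, 3, b"A")
d.write(t, 7, b"B")
d.commit(t)
assert d.mapping[3] == b"A" and d.mapping[7] == b"B"
```

Aborts are essentially free: because the old versions were never overwritten, discarding the staged mapping restores the pre-transaction state.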

SSD Lifetime: Two trends can derail the adoption of Solid State Devices (SSDs) as a primary storage device: first, general-purpose workloads are harder on flash than mobile workloads; second, increasing flash densities result in decreased block erase cycles. This combination of stressful workloads and fewer erase cycles can significantly reduce SSD lifetime. We propose a hybrid storage device that uses a hard disk drive (HDD) as a write cache for an SSD. Our design is motivated by two observations: first, HDDs can match the sequential write bandwidth of mid-range SSDs; second, both server and desktop workloads contain a significant fraction of block overwrites. By maintaining a log-structured HDD cache and migrating cached data periodically, our hybrid design reduces writes to the SSD while retaining its excellent performance. We evaluated our system using a variety of I/O traces from Windows and find that it extends SSD lifetime by a factor of two and reduces average I/O latency by 42%.
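The savings come from overwrites coalescing in the cache. The back-of-the-envelope sketch below (assumed behavior, not the paper's implementation) counts how many writes the SSD actually sees: every write appends cheaply to the log-structured HDD cache, but migration flushes only the latest version of each block.

```python
# Back-of-the-envelope sketch of overwrite coalescing in an HDD write
# cache: all writes append to the HDD log, but only the final version of
# each distinct block is migrated to the SSD.
def cache_effect(write_trace):
    """write_trace: sequence of logical block numbers, in write order.
    Returns (HDD log appends, writes that reach the SSD at migration)."""
    latest = set()
    for block in write_trace:
        latest.add(block)            # overwrites collapse onto one entry
    return len(write_trace), len(latest)

hdd_appends, ssd_writes = cache_effect([5, 9, 5, 5, 9, 2])
assert (hdd_appends, ssd_writes) == (6, 3)  # 6 appends, only 3 SSD writes
```

The higher the fraction of overwrites in the trace, the larger the gap between the two counts, and thus the greater the extension of SSD lifetime.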

SSD Reliability: Redundancy schemes such as RAID-5 are highly susceptible to correlated failures when used on SSDs: when an old device fails, it is highly probable that some data is not recoverable from the remaining devices. This data loss occurs because SSDs wear out and exhibit higher Bit Error Rates (BERs) as they receive more writes. Since conventional RAID schemes balance writes evenly across devices, they wear SSDs out at similar rates. Intuitively, such solutions attempt to protect data on aging devices by storing it on other, equally old devices. We propose Diff-RAID, a parity-based redundancy solution that creates an age differential in an array of SSDs. Diff-RAID distributes parity blocks unevenly across the array, leveraging the higher update rate of parity to age devices at different rates. Diff-RAID is one or more orders of magnitude more reliable than RAID-5 across a range of flash chips, and offers a smooth trade-off between reliability and throughput.
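A rough sketch of the aging effect (illustrative numbers and a simplified model, not the paper's analysis): for random single-block updates in an n-device array, each logical write touches one data block and one parity block, so a device holding a larger fraction of the parity absorbs proportionally more writes and wears out sooner, keeping the rest of the array young.

```python
# Rough sketch (simplified model) of how uneven parity placement creates
# an age differential: each logical write = one data write (spread over
# the devices not holding that stripe's parity) + one parity write.
def relative_wear(parity_shares):
    """parity_shares: fraction of parity blocks held by each device (sums to 1).
    Returns each device's relative write load per logical update."""
    n = len(parity_shares)
    return [(1 - p) / (n - 1) + p for p in parity_shares]

raid5 = relative_wear([0.25] * 4)            # even parity: uniform aging
diff = relative_wear([0.7, 0.1, 0.1, 0.1])   # parity skewed to device 0
assert max(raid5) == min(raid5)              # RAID-5 wears all devices alike
assert diff[0] > max(diff[1:])               # parity-heavy device ages fastest
```

When the parity-heavy device is replaced, Diff-RAID reshuffles parity so the array always contains at most one near-worn-out device, which is what improves reliability over RAID-5's uniform aging.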