The past five years have seen a resurgence of work on replicated, distributed database systems to meet the demands of intermittently connected clients and disaster-tolerant database systems that span data centers. Each product or prototype uses a weakened definition of replica consistency or isolation, and in some cases new mechanisms, to improve partition tolerance, availability, and performance. We have developed a framework for defining and comparing these weaker consistency and isolation properties. We show how they affect the programming model and how they are leveraged by new mechanisms. Although we don’t recommend one solution above all others, we hope this framework will help architects navigate this complex design space.
The outcome of this survey and analysis was a tutorial presented at the ACM International Conference on Management of Data (SIGMOD) in June 2013 in New York. A five-page companion paper also appeared at SIGMOD 2013.
The slide deck, available here, is the most complete and up-to-date version of the material.
We also presented the material as keynotes at the New England Database Summit 2013, the 19th International Conference on Management of Data (COMAD 2013), the 33rd International Conference on Distributed Computing Systems, and the 2013 VLDB Summer School in Shanghai.
- Phil Bernstein and Sudipto Das, Rethinking Eventual Consistency, in Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, ACM, 22 June 2013.