To create novel data center solutions, designs must be based on comprehensive optimization of all attributes, rather than on gradually accruing incremental changes to current technologies and best practices. The Cloud Computing Futures team is tasked with inventing on a large scale. Our goal is to reduce data center costs, including power consumption, four-fold or more, while accelerating deployment and increasing adaptability and resilience to failures.
About Cloud Computing Futures
The Cloud Computing Futures work begins with a key concept: the data center is a computer, and it must be designed and programmed as an integrated system. This means examining all aspects of data centers from first principles.
Microsoft cannot gain a sustainable competitive advantage by incrementally improving the “off-the-shelf design” of existing data center hardware and software. Treating the data center as a single artifact will allow us to re-architect the platform in radical ways, and a holistic approach to data center design will yield distinct advantages. We must simplify in key areas to eliminate the outdated, redundant designs in today’s one-size-fits-all data centers. We will create new power, cooling, hardware, and software standards that advance simplicity, flexibility, and true “lights-out” operation, and we will unite software flexibility with new hardware designs.
The goal is to design self-adapting systems that respond to diverse workload demands without increasing system complexity, and while lowering costs. We must invent new hardware/software solutions that solve loading, provisioning, performance, and other issues and that enable an exciting new generation of applications and services, while maintaining compatibility with legacy APIs and services exposed at the edge. We will pursue invention and innovation that provides a competitive advantage to Microsoft in our data centers. Given changing business needs and rapidly evolving application and user expectations, we must reduce the time required to deploy new data center infrastructure. We will explore packaging and delivery options that cut data center deployment times from 1–2 years to a few months, and server deployments to a few days.
Publications
- Sreenivas Addagatla, Mark Shaw, Suyash Sinha, Prashant Chandra, Ameya S. Varde, and Michael Grinkrug, Direct Network Prototype Leveraging Light Peak Technology, IEEE, August 2010.
- Keith Grochow, Bill Howe, Mark Stoermer, Roger Barga, and Ed Lazowska, Client + Cloud: Evaluating Seamless Architectures for Visual Data Analytics in the Ocean Sciences, in Proceedings of the 22nd International Conference on Scientific and Statistical Database Management, Springer Verlag, 28 June 2010.
- Eran Chinthaka Withana, Beth Plale, Roger Barga, and Nelson Araujo, Versioning for Workflow Evolution, in Proceedings of the Third International Workshop on Data Intensive Distributed Computing, Association for Computing Machinery, Inc., 21 June 2010.
- Wei Lu, Jared Jackson, and Roger Barga, AzureBlast: A Case Study of Developing Science Applications on the Cloud, in Proceedings of the 1st Workshop on Scientific Cloud Computing (Science Cloud 2010), Association for Computing Machinery, Inc., 21 June 2010.
- John R. Delaney and Roger S. Barga, Observing the Oceans: A 2020 Vision for Ocean Science, in The Fourth Paradigm: Data-Intensive Scientific Discovery, Microsoft Research, 22 November 2009.