Content Compression in Networks

With the advent of globalization, networked services have a global audience. For example, a large corporation today may have branch offices in dozens of cities around the globe. In such a setting, the corporation's IT admins and network planners face a dilemma. On the one hand, they could centralize the servers that power the corporation's IT services (e.g., email servers, file servers) at one or a small number of locations. This would keep administration costs low but drive up network costs and hurt performance, because, for instance, what would normally have been LAN traffic (e.g., between a client and a file server) becomes WAN traffic. On the other hand, the servers and services could be distributed to be closer to clients. However, this would likely drive up the complexity and cost of developing and administering the services.

This project arises from the quest to have the best of both worlds: the operational benefits of centralization along with the performance benefits of distribution. Protocol-independent content compression (e.g., Spring et al., SIGCOMM 2000) is one technique that helps bridge the gap by making WAN communication more efficient through the elimination of redundancy in traffic. Such compression is typically applied at the IP layer, for instance, using a pair of middleboxes placed at either end of a WAN link connecting a corporation's data center and a branch office. Each box stores the payload of the traffic flowing between them in a dictionary. When one box detects chunks of data in its packet buffer that match entries in its dictionary (and hence are assumed to be present in the implicitly synchronized dictionary of its counterpart), it sends pointers to the previously transmitted data rather than the data itself. The box at the far end reconstructs the data from the pointers before forwarding the original data on to the destination. Thus, in effect, these boxes operate as store-(de)compress-forward routers. Recently, content compression has seen increasing commercial deployment through products from Riverbed, Cisco, and others. In these products, content compression is deployed mainly in point-to-point settings between two middleboxes across the bottleneck WAN link.
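The dictionary-and-pointer encoding described above can be sketched in a few lines of Python. This is an illustrative simplification: the window size, use of Python's built-in `hash`, and the literal/pointer token format are invented here, and do not reflect the fingerprinting details of Spring et al. or any commercial product.

```python
W = 8  # fingerprint window size; purely illustrative

class REBox:
    """One end of a compress/decompress pair with an implicitly synchronized dictionary."""

    def __init__(self):
        self.cache = bytearray()  # all payload bytes seen so far
        self.index = {}           # hash of a W-byte window -> offset in cache

    def _insert(self, data: bytes):
        """Append payload to the dictionary and index its windows."""
        base = len(self.cache)
        self.cache.extend(data)
        for i in range(len(data) - W + 1):
            self.index[hash(data[i:i + W])] = base + i

    def encode(self, payload: bytes):
        """Return tokens: (b'L', literal_bytes) or (b'P', cache_offset, length)."""
        tokens, i, lit_start = [], 0, 0
        while i <= len(payload) - W:
            pos = self.index.get(hash(payload[i:i + W]))
            if pos is not None and self.cache[pos:pos + W] == payload[i:i + W]:
                n = W  # extend the match as far as the dictionary allows
                while (pos + n < len(self.cache) and i + n < len(payload)
                       and self.cache[pos + n] == payload[i + n]):
                    n += 1
                if i > lit_start:
                    tokens.append((b'L', payload[lit_start:i]))
                tokens.append((b'P', pos, n))  # pointer instead of raw bytes
                i += n
                lit_start = i
            else:
                i += 1
        if lit_start < len(payload):
            tokens.append((b'L', payload[lit_start:]))
        self._insert(payload)  # update this end's dictionary
        return tokens

    def decode(self, tokens):
        """Reconstruct the original payload from literals and pointers."""
        out = bytearray()
        for t in tokens:
            if t[0] == b'L':
                out.extend(t[1])
            else:
                _, pos, n = t
                out.extend(self.cache[pos:pos + n])
        self._insert(bytes(out))  # mirror the encoder's dictionary update
        return bytes(out)
```

Because the decoder inserts exactly the bytes the encoder inserted, the two dictionaries remain synchronized without any explicit synchronization protocol, which is the "implicitly synchronized" property mentioned above.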

In order to better understand redundancy in network traffic, we conducted a large-scale study driven by several terabytes of packet payload traces collected at 12 distinct network locations (see the SIGMETRICS paper below for details). One surprising finding of the study was that more than 75% of the bandwidth savings obtained by an in-memory, middlebox-based redundancy elimination system can be attributed to redundant byte-strings within each enterprise client's own traffic. This implies that an end-to-end redundancy elimination solution could obtain a significant fraction of the middlebox's bandwidth savings.
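As a rough illustration of this kind of comparison (not the actual methodology of the study), one can contrast the redundancy detectable against a shared, middlebox-style dictionary with the redundancy detectable using only each client's own history. The fixed-size chunking, the deduplication of chunks within a packet, and the trace format below are all invented for this sketch.

```python
W = 32  # chunk size; purely illustrative

def chunks(payload: bytes):
    """Split a payload into non-overlapping W-byte chunks (deduplicated)."""
    return {payload[i:i + W] for i in range(0, len(payload) - W + 1, W)}

def redundant_bytes(payloads, history):
    """Count bytes whose chunk was already in `history`; update `history` in place."""
    saved = 0
    for payload in payloads:
        for c in chunks(payload):
            if c in history:
                saved += W
            else:
                history.add(c)
    return saved

def per_client_vs_shared(trace):
    """trace: list of (client_id, payload). Return (per-client, shared) savings."""
    shared = redundant_bytes([p for _, p in trace], set())
    histories = {}  # one dictionary per client, as an end-host service would have
    per_client = sum(
        redundant_bytes([p], histories.setdefault(cid, set())) for cid, p in trace
    )
    return per_client, shared
```

The ratio `per_client / shared` is a crude analogue of the 75+% figure: the share of middlebox savings that an end-host dictionary, seeing only its own traffic, could also capture.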

We are currently designing and building content compression as an end-host service, called EndRE, for enterprise systems (see the NSDI paper below for details). This is joint work with collaborators at the University of Wisconsin-Madison (Ashok Anand, Chitra Muthukrishnan, and Prof. Aditya Akella), George Varghese (University of California, San Diego; visitor in Summer '09), Athula Balachandran (intern, CMU), and Pushkar Chitnis from the ADP group at Microsoft Research, India.
