New Results in Networking Research 2013
Sponsored by the Mobility and Networking Research Group
Microsoft Research, Redmond, Washington, USA
12:00 – 4:00 PM, Thursday, Dec 5, 2013
Microsoft Building 99, Room 1927 (Lecture Room B)

Join us for a half-day summit to hear about recent advances in networking and systems from some of the leading researchers in academia.

12:00–12:15 PM – Lunch

12:15–1:15 PM – Session I

  • Gaining control of cellular traffic accounting by spurious TCP retransmission
    KyoungSoo Park, KAIST
  • Detecting Price Discrimination in E-Commerce
    Nikolaos Laoutaris, Telefonica
  • Vision for Verifiable Network Function Outsourcing
    Vyas Sekar, Stony Brook University

1:15–1:30 PM – Break

1:30–2:30 PM – Session II

  • The Transition to BGP Security: Is the Juice Worth the Squeeze?
    Sharon Goldberg, Boston University
  • On Open IX and the future of public peering in the US
    Walter Willinger, Niksun
  • Network neutrality tomography
    Katerina Argyraki, EPFL

2:30–2:40 PM – Break

2:40–4:00 PM – Session III

  • Alidade: IP Geolocation without Active Probing
    Bruce Maggs, Duke University and Akamai
  • Mapping the Expansion of Google’s Serving Infrastructure
    Ramesh Govindan, USC
  • Transforming Wi-Fi Into a Sensor
    Shyam Gollakota, University of Washington
  • Multi-tenant Resource Allocation For Shared Cloud Storage
    Mike Freedman, Princeton University


1. Gaining control of cellular traffic accounting by spurious TCP retransmission

KyoungSoo Park, KAIST

Packet retransmission is a fundamental TCP feature that ensures reliable data transfer between two end nodes. Interestingly, when it comes to cellular data accounting, TCP retransmission raises an important policy issue. Cellular ISPs might argue that all retransmitted IP packets should be counted toward the bill, since they consume infrastructure resources. Subscribers, on the other hand, might want to pay only for the application data, excluding the retransmitted bytes. Regardless of the policy chosen, however, we find that TCP retransmission can easily be abused to manipulate the current practice of cellular traffic accounting.

In this work, we investigate the TCP retransmission accounting policies of 12 cellular ISPs around the world and report accounting vulnerabilities exposed by TCP retransmission attacks. First, we find that cellular data accounting policies vary from ISP to ISP. While the majority of cellular ISPs blindly account for every IP packet, some ISPs intentionally exclude retransmitted packets from the user's bill for fairness. Second, we show that it is easy to launch a “usage-inflation” attack against the ISPs that blindly account for every IP packet. In our experiments, we inflated a subscriber's usage up to the monthly limit within only 9 minutes, entirely without the subscriber's knowledge. Against the ISPs that do not account for retransmission, we successfully launch a “free-riding” attack by tunneling the payload under fake TCP headers that look like retransmissions. To counter these attacks, we argue that ISPs should consider ignoring TCP retransmission for billing while detecting tunneling attacks with deep packet inspection. We implement and evaluate Abacus, a lightweight accounting system that reliably detects “free-riding” attacks even on 10 Gbps links.
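To make the free-riding idea concrete, here is a minimal sketch, not the authors' Abacus tool, of the kind of packet crafting involved: fresh payload is sent in segments that reuse an already-acknowledged sequence number, so accounting that merely matches sequence numbers treats them as retransmissions. The endpoint address, port, and chunking are hypothetical, a cooperating server would be needed to reassemble the data, and sending raw packets requires root privileges.

    # Toy sketch of a "free-riding" tunnel: new data hidden in packets
    # that look like TCP retransmissions (reused sequence number).
    # For illustration only; all endpoint details are hypothetical.
    from scapy.all import IP, TCP, Raw, send

    SERVER = "198.51.100.7"   # hypothetical cooperating endpoint
    SPORT, DPORT = 40000, 80
    BASE_SEQ = 1000           # sequence number of a segment already ACKed

    def send_fake_retransmission(chunk: bytes) -> None:
        # Reusing BASE_SEQ makes a middlebox that tracks only sequence
        # numbers classify the packet as a retransmission.
        pkt = (IP(dst=SERVER)
               / TCP(sport=SPORT, dport=DPORT, seq=BASE_SEQ, flags="PA")
               / Raw(load=chunk))
        send(pkt, verbose=False)

    payload = b"data smuggled past the accounting system"
    for i in range(0, len(payload), 16):
        send_fake_retransmission(payload[i:i + 16])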

2. Detecting Price Discrimination in E-Commerce

Nikolaos Laoutaris, Telefonica

After years of speculation, price discrimination in e-commerce, driven by the personal information that users (involuntarily) leave online, has started attracting the attention of privacy researchers, regulators, and the press. In our previous work we demonstrated instances of products whose online prices varied depending on the location and characteristics of prospective buyers. In an effort to scale up our study we have turned to crowd-sourcing. Using a browser extension, we have collected the prices obtained by an initial set of 340 test users as they surf the web for products of their interest. This initial dataset has permitted us to identify a set of online stores where price variation is more pronounced. We have focused on this subset, performed a systematic crawl of their products, and logged the prices obtained from different vantage points and browser configurations. Analyzing this dataset, we see that several retailers return prices for the same product that vary by 10%-30%, and there are isolated cases that vary by up to a multiplicative factor, e.g., x2. Despite our best efforts, we could not attribute the observed price gaps to currency, shipping, or taxation differences.
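As a rough illustration of the analysis step, and with a CSV layout (product_id, vantage_point, price) that is a hypothetical stand-in for the crowd-sourced dataset rather than the study's actual format, one can flag products whose prices diverge across vantage points:

    # Flag products whose price spread across vantage points exceeds a
    # threshold; a spread of 0.30 corresponds to the 30% gaps reported.
    import csv
    from collections import defaultdict

    def price_spreads(path, threshold=0.10):
        prices = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                prices[row["product_id"]].append(float(row["price"]))
        for product, ps in prices.items():
            lo, hi = min(ps), max(ps)
            spread = (hi - lo) / lo      # relative gap between extremes
            if spread >= threshold:
                yield product, lo, hi, spread

    for product, lo, hi, spread in price_spreads("prices.csv"):
        print(f"{product}: {lo:.2f}-{hi:.2f} ({spread:.0%} spread)")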

3. Vision for Verifiable Network Function Outsourcing

Vyas Sekar, Stony Brook University

Network function outsourcing (NFO) can enable enterprises and small businesses to achieve the performance and security benefits offered by middleboxes (e.g., firewalls, IDSes) without incurring the high equipment and operating costs that such functions entail. For this vision to fully take root, however, we argue that NFO customers must be able to verify that the service is operating as intended with respect to: (1) functionality (e.g., did the packets traverse the desired sequence of middlebox modules?); (2) performance (e.g., is the latency comparable to an “in-house” service?); and (3) accounting (e.g., is the CPU/memory consumption being accounted for correctly?). In this preliminary work, we attempt to formalize these requirements and outline a high-level roadmap to address the challenges involved.
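The abstract leaves the verification mechanisms open; as one hedged illustration of the functionality check, each middlebox in the outsourced chain could extend a keyed tag over the packet, letting the customer recompute and confirm the traversal order. The keys and chain below are invented for the sketch:

    # Toy traversal-order check: each middlebox extends an HMAC chain
    # over the packet ID; the customer verifies with the shared keys.
    import hmac, hashlib

    def extend_tag(key, prev_tag, packet_id):
        return hmac.new(key, prev_tag + packet_id, hashlib.sha256).digest()

    def verify_chain(keys, packet_id, final_tag):
        tag = b"\x00" * 32
        for key in keys:              # expected traversal order
            tag = extend_tag(key, tag, packet_id)
        return hmac.compare_digest(tag, final_tag)

    chain_keys = [b"firewall-key", b"ids-key", b"proxy-key"]  # hypothetical
    pid = b"pkt-0001"
    tag = b"\x00" * 32
    for k in chain_keys:              # what the outsourced chain would do
        tag = extend_tag(k, tag, pid)
    print(verify_chain(chain_keys, pid, tag))   # True iff order matched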

4. The Transition to BGP Security: Is the Juice Worth the Squeeze?

Sharon Goldberg, Boston University

The Internet's interdomain routing system is notoriously insecure. After more than a decade of effort, we are finally seeing the initial deployment of the Resource Public Key Infrastructure (RPKI), which certifies IP address allocations using a centralized infrastructure of trusted authorities. To further improve security, standards bodies are developing BGPSEC, a protocol for certifying advertised routes. In this talk, I discuss the challenge of transitioning from legacy BGP to the RPKI and then to BGPSEC. I argue that transitioning to the RPKI is the most crucial step from a security perspective, but that it raises new technical and policy challenges. My argument is based on (1) our theoretical and experimental analysis of the security benefits of BGPSEC during the transition, when BGPSEC coexists alongside legacy insecure BGP, and (2) an analysis of the RPKI in a threat model where its trusted authorities are misconfigured, compromised, or compelled (e.g., by governments) to misbehave.
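To make the RPKI's role concrete, here is a sketch of route-origin validation in the style of RFC 6811; the ROA entries are hypothetical, and a real validator derives them from cryptographically signed objects fetched from the RPKI repositories:

    # Classify a BGP announcement against validated ROAs:
    # valid / invalid / unknown, per RFC 6811.
    from ipaddress import ip_network

    ROAS = [  # (prefix, maxLength, authorized origin ASN) -- hypothetical
        (ip_network("192.0.2.0/24"), 24, 64496),
        (ip_network("198.51.100.0/22"), 24, 64497),
    ]

    def origin_validate(prefix, origin_asn):
        net = ip_network(prefix)
        covered = False
        for roa_net, max_len, asn in ROAS:
            if net.subnet_of(roa_net):        # some ROA covers this prefix
                covered = True
                if asn == origin_asn and net.prefixlen <= max_len:
                    return "valid"
        return "invalid" if covered else "unknown"

    print(origin_validate("192.0.2.0/24", 64496))    # valid
    print(origin_validate("192.0.2.0/25", 64496))    # invalid: exceeds maxLength
    print(origin_validate("203.0.113.0/24", 64500))  # unknown: no covering ROA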

5. On Open IX and the future of public peering in the US

Walter Willinger, Niksun

Compared to Europe, the US peering ecosystem is well-known for its scarcity of public peering opportunities. While most routed networks in Europe can typically choose among multiple different types of interconnections and often do so in innovative and novel ways, their US counterparts have in general only a few options, and those options have hardly changed in the past 15+ years during which the Internet has changed in so many ways. “Open IX” refers to a recently announced effort to enrich the US peering ecosystem and make it comparable to its European counterpart by establishing a European-style Internet eXchange Point (IXP) model in the US. This talk is about the “who’s who in Open IX”, the different Internet stakeholders’ current attitudes towards public vs. private peering, and why the time may indeed be right to go for a radical overhaul of the US peering ecosystem.

6. Network neutrality tomography

Katerina Argyraki, EPFL

When can we reason about the neutrality of a network based on external observations? We prove sufficient conditions under which it is possible to (a) detect neutrality violations and (b) localize them to specific links, based on external observations. Our insight is that, if the network is not neutral, when we make external observations from different vantage points, these will most likely be inconsistent with each other. So, where existing tomographic techniques try to form solvable systems of equations to infer network properties, we try to form *un*solvable systems that reveal neutrality violations. We present an algorithm that relies on this idea to identify sets of non-neutral links based on external observations, and we show, through network emulation and a controlled testbed, that it achieves good accuracy for a variety of topologies and network conditions.
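A toy version of this idea, with an invented four-path, three-link topology, shows how an unsolvable system exposes a violation: if every path delay is the sum of its links' delays, the overdetermined system is consistent, but throttling one path's traffic on a shared link leaves a large least-squares residual.

    # Paths are rows of A (which links each path uses); observations y
    # are path delays. A nonzero residual means no link-delay assignment
    # explains all observations, i.e., some path is treated differently.
    import numpy as np

    A = np.array([[1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 0],
                  [1, 1, 1]], dtype=float)

    neutral = A @ np.array([5.0, 7.0, 3.0])              # consistent delays
    violating = neutral + np.array([40.0, 0.0, 0.0, 0.0])  # path 0 throttled

    for name, y in [("neutral", neutral), ("violating", violating)]:
        _, residual, _, _ = np.linalg.lstsq(A, y, rcond=None)
        r = residual[0] if residual.size else 0.0
        print(f"{name}: residual = {r:.1f}")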

7. Alidade: IP Geolocation without Active Probing

Bruce Maggs, Duke University and Akamai

This talk presents Alidade, an IP geolocation system that makes extensive use of available network measurement data, but does not issue any probes of its own. IP geolocation systems receive queries of the form, "Where is 128.2.205.42?" and answer with predictions such as, "Pittsburgh, PA". Like commercial geolocation systems, but unlike geolocation systems previously reported in the academic literature, Alidade precomputes predictions for all of IP space before receiving any queries, and answers queries immediately. The talk demonstrates that Alidade outperforms existing commercial systems on a large ground-truth data set.
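While Alidade's internals are not described here, a hedged sketch of the constraint style such systems build on: each delay measurement from a landmark with known coordinates bounds the target's distance (using the common rule of thumb of roughly 100 km per millisecond in fiber), and any prediction must lie inside every resulting disk. The landmark data below is invented:

    # Feasibility test for a candidate location against delay-derived
    # distance bounds from landmarks with known positions.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2
        return 6371 * 2 * asin(sqrt(a))

    # (lat, lon, one-way delay in ms), from pre-existing measurements
    landmarks = [(47.6, -122.3, 2.0),    # Seattle
                 (37.8, -122.4, 12.0),   # San Francisco
                 (40.7, -74.0, 40.0)]    # New York

    def feasible(lat, lon, km_per_ms=100.0):
        return all(haversine_km(lat, lon, la, lo) <= d * km_per_ms
                   for la, lo, d in landmarks)

    print(feasible(47.6, -122.3))  # near Seattle: inside all disks -> True
    print(feasible(40.7, -74.0))   # violates the 2 ms Seattle bound -> False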

8. Mapping the Expansion of Google’s Serving Infrastructure

Ramesh Govindan, University of Southern California

Modern content-distribution networks both provide bulk content and act as “serving infrastructure” for web services in order to reduce user-perceived latency. Serving infrastructures such as Google’s are now critical to the online economy, making it imperative to understand their size, geographic distribution, and growth strategies. To this end, we develop techniques that enumerate the IP addresses of servers in these infrastructures, find their geographic location, and identify the association between clients and clusters of servers. While general techniques for server enumeration and geolocation can exhibit large error, our techniques exploit the design and mechanisms of serving infrastructure to improve accuracy. We use the EDNS-client-subnet DNS extension to measure which clients a service maps to which of its serving sites. We devise a novel technique that uses this mapping to geolocate servers by combining noisy information about client locations with speed-of-light constraints, and we demonstrate that it substantially improves geolocation accuracy relative to existing approaches. We also cluster server IP addresses into physical sites by measuring RTTs and adapting the cluster thresholds dynamically. Google’s serving infrastructure has grown dramatically over the past ten months, and we use our methods to chart its growth and understand its content-serving strategy. We find that the number of Google serving sites has increased more than sevenfold, and that most of the growth has occurred by placing servers in large and small ISPs across the world, not by expanding Google’s backbone.
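The EDNS-client-subnet measurement can be sketched with dnspython: query a public resolver while claiming the client sits in a chosen prefix, and record which serving IPs come back. The prefixes and resolver below are illustrative, and whether ECS is honored depends on the resolver and the authoritative server:

    # Ask which A records are returned for clients in a given prefix.
    import dns.edns
    import dns.message
    import dns.query
    import dns.rdatatype

    def servers_for_prefix(prefix, plen, qname="www.google.com"):
        ecs = dns.edns.ECSOption(prefix, plen)   # pretend the client is here
        query = dns.message.make_query(qname, "A", use_edns=0, options=[ecs])
        resp = dns.query.udp(query, "8.8.8.8", timeout=3.0)
        return sorted(rr.address for rrset in resp.answer
                      for rr in rrset if rr.rdtype == dns.rdatatype.A)

    # Sweeping many client prefixes maps clients to serving sites.
    for prefix in ["198.51.100.0", "203.0.113.0"]:
        print(prefix, servers_for_prefix(prefix, 24))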

9. Transforming Wi-Fi Into a Sensor

Shyam Gollakota, University of Washington

The last two decades have seen an exponential proliferation of Wi-Fi devices. Wi-Fi capability today is incorporated in a diverse set of devices such as smartphones, gaming consoles, and video players. In this talk, I will show how to leverage the ubiquity of Wi-Fi to enable rich sensing capabilities such as gesture recognition. Specifically, I will present WiSee, a novel gesture recognition system that leverages wireless signals to enable whole-home sensing and recognition of human gestures. Since wireless signals do not require line-of-sight and can traverse walls, WiSee can enable whole-home gesture recognition using only a few wireless sources. Further, it achieves this goal without requiring instrumentation of the human body with sensing devices. Our proof-of-concept implementation shows that WiSee can identify and classify a set of nine gestures with an average accuracy of 94%.
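The underlying physics can be illustrated with a toy simulation: a reflection off a hand moving toward the receiver shifts the carrier slightly up in frequency, and motion away shifts it down. WiSee extracts such small Doppler shifts from actual Wi-Fi (OFDM) transmissions, whereas the signal below is synthetic.

    # Detect the sign of a small Doppler shift on a reflected tone.
    import numpy as np

    FS = 1000                     # sample rate (Hz), toy baseband
    t = np.arange(FS) / FS        # one second of samples

    def reflected(doppler_hz):
        # Static path plus a weaker, Doppler-shifted reflection.
        return np.ones_like(t) + 0.3 * np.exp(2j * np.pi * doppler_hz * t)

    def dominant_shift(x):
        spectrum = np.abs(np.fft.fft(x))
        spectrum[0] = 0                     # ignore the static path at DC
        freqs = np.fft.fftfreq(len(x), d=1/FS)
        return freqs[np.argmax(spectrum)]

    print(dominant_shift(reflected(+5.0)))  # +5.0 -> motion toward receiver
    print(dominant_shift(reflected(-5.0)))  # -5.0 -> motion away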

10. Multi-tenant Resource Allocation For Shared Cloud Storage

Mike Freedman, Princeton University

Shared storage services enjoy wide adoption in commercial clouds, but few systems today provide performance isolation or resource allocation. Today’s approaches to multi-tenant resource allocation are based either on per-VM allocations or on hard rate limits that assume uniform workloads in order to achieve high utilization.

In this talk, I first briefly discuss the Pisces framework for achieving datacenter-wide performance isolation and fairness for shared key-value storage (OSDI '12). Pisces achieves per-tenant weighted fair shares of aggregate system resources by decomposing the fair sharing problem into a combination of four complementary mechanisms -- partition placement, weight allocation, replica selection, and weighted fair queuing -- that combine to provide system-wide max-min fairness.
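The system-wide objective can be stated compactly: the sketch below is the textbook progressive-filling algorithm for weighted max-min fairness, not Pisces' distributed mechanisms, with demands and weights invented for illustration.

    # Weighted max-min fairness by progressive filling: repeatedly split
    # spare capacity by weight among tenants whose demand is unmet.
    def weighted_max_min(capacity, demands, weights):
        alloc = {t: 0.0 for t in demands}
        active = set(demands)
        remaining = capacity
        while active and remaining > 1e-9:
            total_w = sum(weights[t] for t in active)
            for t in list(active):
                share = remaining * weights[t] / total_w
                alloc[t] += min(share, demands[t] - alloc[t])
            remaining = capacity - sum(alloc.values())
            active = {t for t in active if demands[t] - alloc[t] > 1e-9}
        return alloc

    # Tenant B is satisfied early; its leftover share is redistributed.
    print(weighted_max_min(100.0,
                           demands={"A": 80, "B": 10, "C": 60},
                           weights={"A": 1, "B": 1, "C": 2}))
    # -> {'A': 30.0, 'B': 10.0, 'C': 60.0}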

Although Pisces achieves resource allocations for network-bound workloads, such guarantees have proven elusive for disk-IO-bound workloads due to the unpredictable effects of IO amplification, IO interference, and IO performance in modern storage stacks. To address this issue, I describe our ongoing work on Libra, a local IO-scheduling framework for SSD-based storage. Libra uses a disk-IO cost model based on virtual IOPs (VOP) to determine the amount of provisionable resources available under interfering workloads. Libra also tracks per-request VOP consumption across the storage stack to allocate resources based on tenants’ dynamic usage profiles.