Peter Key
Publications

    2014

    • Chloe Brown, Christos Efstratiou, Ilias Leontiadis, Daniele Quercia, Cecilia Mascolo, James Scott, and Peter Key, The architecture of innovation: Tracking face-to-face interactions with ubicomp technologies, in Proceedings of UbiComp 2014, ACM – Association for Computing Machinery, 13 September 2014

      The layouts of the buildings we live in shape our everyday lives. In office environments, building spaces affect employees’ communication, which is crucial for productivity and innovation. However, accurate measurement of how spatial layouts affect interactions is a major challenge and traditional techniques may not give an objective view.

      We measure the impact of building spaces on social interactions using wearable sensing devices. We study a single organization that moved between two different buildings, affording a unique opportunity to examine how space alone can affect interactions. The analysis is based on two large scale deployments of wireless sensing technologies: short-range, lightweight RFID tags capable of detecting face-to-face interactions. We analyze the traces to study the impact of the building change on social behavior, which represents a first example of using ubiquitous sensing technology to study how the physical design of two workplaces combines with organizational structure to shape contact patterns.

    • Long Tran-Thanh, Lampros Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings, and Peter Key, Efficient Regret Bounds for Online Bid Optimisation in Budget-Limited Sponsored Search Auctions, in UAI 2014, 30th Conference on Uncertainty in Artificial Intelligence, AUAI, July 2014

      We study the problem of an advertising agent who needs to intelligently distribute her budget across a sequence of online keyword bidding auctions. We assume the closing price of each auction is governed by the same unknown distribution, and study the problem of making provably optimal bidding decisions. Learning the distribution is done under censored observations, i.e. the closing price of an auction is revealed only if the bid we place is above it. We consider three algorithms, namely ε-first, Greedy Product-Limit (GPL) and LuekerLearn, respectively, and we show that these algorithms provably achieve Hannan-consistency. In particular, we show that the regret bound of ε-first is at most O(T^{2/3}) with high probability. For the other two algorithms, we first prove that, by using a censored data distribution estimator proposed by Zeng [19], the empirical distribution of the closing market price converges in probability to its true distribution with a O(1/√t) rate, where t is the number of updates. Based on this result, we prove that both GPL and LuekerLearn achieve a O(√T) regret bound with high probability. This in fact provides an affirmative answer to the research question raised in [1]. We also evaluate the abovementioned algorithms using real bidding data, and show that although GPL achieves the best performance on average (up to 90% of the optimal solution), its long running time may limit its suitability in practice. By contrast, LuekerLearn and ε-first proposed in this paper achieve up to 85% of the optimal, but with an exponential reduction in computational complexity (a saving up to 95%, compared to GPL).
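
      As a rough illustration of the ε-first strategy analysed above, the sketch below spends an ε fraction of the budget on uniform exploratory bids, builds an empirical estimate of the closing-price distribution from the prices revealed on winning bids (the censored feedback), and then bids greedily against that estimate. The Beta price distribution, the bid grid and the simple payoff proxy are illustrative assumptions, not the paper's exact algorithm or analysis.

```python
import bisect
import random

def epsilon_first(budget, rounds, value_per_click, eps=0.1,
                  price_sample=lambda: random.betavariate(2, 5)):
    """Toy epsilon-first bidder: explore with eps*budget, then exploit.

    Closing prices are i.i.d. draws from an unknown distribution (a Beta
    here, purely for illustration). A bid wins iff it exceeds the closing
    price, and the price is revealed only on a win (censored feedback).
    """
    explore_budget = eps * budget
    observed = []                  # sorted closing prices from winning bids
    wins, spend = 0, 0.0
    for _ in range(rounds):
        price = price_sample()     # hidden market price for this auction
        if spend < explore_budget:
            bid = random.random()  # uniform exploratory bid
        elif observed:
            # Exploit: maximise (value - bid) * P(win) over a coarse bid
            # grid, with P(win) read off the empirical price distribution.
            def payoff(b):
                p_win = bisect.bisect_right(observed, b) / len(observed)
                return p_win * (value_per_click - b)
            bid = max((b / 100 for b in range(101)), key=payoff)
        else:
            bid = value_per_click / 2
        if bid >= price and spend + price <= budget:
            spend += price         # pay the (second-price-like) closing price
            wins += 1
            bisect.insort(observed, price)  # price revealed only on a win
    return wins, spend

print(epsilon_first(budget=50.0, rounds=5000, value_per_click=1.0))
```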

    • Yoram Bachrach, Sofia Ceppi, Ian A. Kash, Peter Key, and David Kurokawa, Optimising Trade-offs Among Stakeholders in Ad Auctions, in EC'14, 15th ACM Conference on Economics and Computation, Stanford, CA, USA, ACM, June 2014

      We examine trade-offs among stakeholders in ad auctions. Our metrics are revenue for the utility of the auctioneer, the number of clicks for the utility of the users, and welfare for the utility of the advertisers. We show how to optimize linear combinations of the stakeholder utilities, and that these can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities.

      We use simulations and analysis of real-world sponsored search auction data to demonstrate the feasible trade-offs, examining the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short term effects, when the players do not have the time to modify their behavior, and long term equilibrium conditions.

      Finally, we examine a combinatorially richer constrained optimization problem, where there are several possible allowed configurations (templates) of ad formats. This model captures richer ad formats, which allow using the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved, resulting in not having a symmetric Nash equilibrium or having one with poor welfare. We also provide positive results for restricted cases.

    • Frank Kelly, Peter Key, and Neil Walton, Incentivized Optimal Advert Assignment via Utility Decomposition, in EC'14, 15th ACM Conference on Economics and Computation, Stanford, CA, USA, ACM Electronic Commerce, June 2014

      We consider a large-scale Ad-auction where adverts are assigned over a potentially infinite number of searches. We capture the intrinsic asymmetries in information between advertisers, the advert platform and the space of searches: advertisers know and can optimize the average performance of their advertisement campaign; the platform knows and can optimize on each search instance; and, neither party knows the distribution of the infinite number of searches that can occur. We look at maximizing the aggregate utility of the click-through rates of advertisers subject to the matching constraints of online ad allocation. We show that this optimization can be decomposed into subproblems, which occur on timescales relevant to the platform or the advertisers respectively. The interpretation of the subproblems is that advertisers choose prices which are optimal given the average click-through rate they receive, that the platform allocates adverts according to a classical assignment problem per search impression and that prices satisfy a nominal complementary slackness condition. We then place this optimization result in a game-theoretic framework by assuming that advertisers bid strategically to maximize their net benefit. In this setting, we construct a mechanism with a unique Nash equilibrium that achieves the decomposition just described, and thus maximizes aggregate utility. This simple and implementable mechanism is as follows. When a search occurs, the platform allocates advertisement slots in order to maximize the expected bid from click-throughs – this is a classical assignment problem. If an advert receives a click, the platform then solves the assignment problem a second time with the advertiser’s bid replaced by a bid which is uniformly distributed between zero and the original bid. The advertiser is then charged their bid minus a rebate. The rebate is the product of the advertiser’s bid and the ratio of the advertiser’s click-through rate in the second assignment calculation (after a click-through) to the first assignment click-through rate (before the click-through). We demonstrate that, under the assignment and pricing mechanism just described, advertisers bidding strategically will maximize aggregate utility. The novelty of the mechanism just described is that, while maximizing a utilitarian objective, it can be implemented by the platform in a strategic environment on the time-scales relevant to the platform (per-impression) and advertiser (on-average) respectively, and neither party requires information on the distribution of searches. We also show that dynamic models, where advertisers adapt their bids smoothly over time, will converge to the solution that maximizes aggregate utility.
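
      A minimal sketch of the per-click pricing rule described above, assuming separable click-through rates (ad quality times a slot factor) so that the per-impression assignment problem reduces to a sort; the paper's mechanism solves a general assignment problem per search. All names and numbers are illustrative.

```python
import random

def assign(bids, quality, slot_ctr):
    """Optimal assignment for separable CTRs (quality[i] * slot_ctr[s]):
    sort ads by bid * quality and fill slots in decreasing-CTR order.
    Returns a map ad_index -> achieved click-through rate."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] * quality[i],
                   reverse=True)
    return {i: quality[i] * slot_ctr[s]
            for s, i in enumerate(order[:len(slot_ctr)])}

def charge_on_click(i, bids, quality, slot_ctr):
    """Price charged to advertiser i after a click: re-run the assignment
    with i's bid replaced by a uniform draw on [0, bids[i]]; the rebate is
    bids[i] times the ratio of i's CTR after resampling to its CTR before.
    (Called only after a click, so i was allocated in the first solve.)"""
    before = assign(bids, quality, slot_ctr)
    resampled = list(bids)
    resampled[i] = random.uniform(0.0, bids[i])
    after = assign(resampled, quality, slot_ctr)
    rebate = bids[i] * after.get(i, 0.0) / before[i]
    return bids[i] - rebate

bids, quality, slot_ctr = [2.0, 1.5, 1.0], [0.10, 0.12, 0.08], [1.0, 0.5]
print(charge_on_click(0, bids, quality, slot_ctr))
```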

    • Yoram Bachrach, Sofia Ceppi, Ian Kash, Peter Key, Filip Radlinski, Ely Porat, Michael Armstrong, and Vijay Sharma, Building A Personalized Tourist Attraction Recommender System Using Crowdsourcing, in AAMAS, 2014

      We demonstrate how crowdsourcing can be used to automatically build a personalized tourist attraction recommender system, which tailors recommendations to specific individuals, so different people who use the system each get their own list of recommendations, appropriate to their own traits. Recommender systems crucially depend on the availability of reliable and large-scale data that allows predicting how a new individual is likely to rate items from the catalog of possible items to recommend. We show how to automate the process of generating this data using crowdsourcing, so that such a system can be built even when such a dataset is not initially available. We first find possible tourist attractions to recommend by scraping such information from Wikipedia. Next, we use crowdsourced workers to filter the data, then provide their opinions regarding these items. Finally, we use machine learning methods to predict how new individuals are likely to rate each attraction, and recommend the items with the highest predicted ratings.

    • Gideon Blocq, Yoram Bachrach, and Peter Key, The Shared Assignment Game and Applications to Pricing in Cloud Computing, in AAMAS, 2014

      We propose an extension to the Assignment Game in which sellers provide indivisible heterogeneous goods to their buyers. Each good takes up various amounts of resources and each seller has capacity constraints with respect to the total amount of resources it can provide. Hence, the total amount of goods that the seller can provide is dependent on the set of buyers. In this model, we first demonstrate that the core is empty and proceed to suggest a fair allocation of the resulting utility of an optimal match, using the Shapley value. We then examine scenarios where the worth and resource demands of each good are private information of selfish buyers and consider ways in which they can manipulate the system. We show that such Shapley value manipulations are bounded in terms of the gain an agent can achieve by using them. Finally, since this model can be of use when considering elastic resource allocation and utility sharing in cloud computing domains, we provide simulation results which show our approach maximizes welfare and, when used as a pricing scheme, can also increase the revenue of the cloud server providers over what is achieved with the widely-used fixed pricing scheme.

    2013

    • Mahyar Salek, Yoram Bachrach, and Peter Key, Hotspotting – A Probabilistic Graphical Model For Image Object Localization, in AAAI-13, AAAI, July 2013

      Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively due to the task’s rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and skill level of every participant. We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets.
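
      The sketch below is a deliberately simplified, one-dimensional stand-in for the joint estimation idea: it alternates precision-weighted location estimates with per-worker variance estimates, rather than implementing the paper's probabilistic graphical model or its difficulty terms. The toy data and names are assumptions.

```python
import statistics

def weighted_localize(responses, iters=10):
    """responses[task][worker] = reported coordinate (1-D toy).
    Alternates (a) precision-weighted location estimates with
    (b) per-worker variance estimates from residuals, an EM-style
    simplification of joint skill/answer inference."""
    workers = {w for task in responses for w in task}
    var = {w: 1.0 for w in workers}
    est = [statistics.median(task.values()) for task in responses]
    for _ in range(iters):
        for k, task in enumerate(responses):       # (a) update locations
            num = sum(x / var[w] for w, x in task.items())
            den = sum(1.0 / var[w] for w in task)
            est[k] = num / den
        for w in workers:                           # (b) update skills
            res = [(task[w] - est[k]) ** 2
                   for k, task in enumerate(responses) if w in task]
            var[w] = max(sum(res) / len(res), 1e-6)
    return est, var

resp = [{"a": 10.0, "b": 10.5, "c": 30.0},
        {"a": 5.0, "b": 5.2, "c": 5.9},
        {"a": 7.1, "b": 6.9, "c": 14.0}]
print(weighted_localize(resp))  # worker "c" ends up with a large variance
```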

    • Ben Roberts, Ian Kash, and Peter Key, Ranking and Tradeoffs in Sponsored Search Auctions, in EC13: 14th ACM Conference on Electronic Commerce, ACM, June 2013

      In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives such as revenue and welfare. In this paper, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalize the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
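
      To make the filtering-versus-ordering distinction concrete, here is a hedged sketch; the rank score quality * (bid - reserve) is an assumed form for illustration only, and the paper should be consulted for the precise ranking rule and the conditions under which it raises more revenue.

```python
def rank_filter_only(ads, r):
    """Traditional use of a reserve: drop ads bidding below r, then
    rank the survivors by quality-weighted bid."""
    live = [a for a in ads if a["bid"] >= r]
    return sorted(live, key=lambda a: a["q"] * a["bid"], reverse=True)

def rank_with_reserve(ads, r):
    """Reserve used to *order* ads as well: rank score q * (bid - r).
    (An assumed form for illustration, not the paper's exact rule.)"""
    live = [a for a in ads if a["bid"] >= r]
    return sorted(live, key=lambda a: a["q"] * (a["bid"] - r), reverse=True)

ads = [{"name": "A", "bid": 4.0, "q": 0.05},
       {"name": "B", "bid": 1.5, "q": 0.20},
       {"name": "C", "bid": 1.0, "q": 0.09}]
print([a["name"] for a in rank_filter_only(ads, r=1.2)])   # ['B', 'A']
print([a["name"] for a in rank_with_reserve(ads, r=1.2)])  # ['A', 'B']
```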

    • Jens Witkowski, Yoram Bachrach, Peter Key, and David C. Parkes, Dwelling on the Negative: Incentivizing Effort in Peer Prediction, in HCOMP, 2013

      Agents are asked to rank two objects in a setting where effort is costly and agents differ in quality (which is the probability that they can identify the correct, ground truth, ranking). We study simple output-agreement mechanisms that pay an agent in the case she agrees with the report of another, and potentially penalizes for disagreement through a negative payment. Assuming access to a quality oracle, able to determine whether an agent’s quality is above a given threshold, we design a payment scheme that aligns incentives so that agents whose quality is above this threshold participate and invest effort. Precluding negative payments leads the expected cost of this quality-oracle mechanism to increase by a factor of 2 to 5 relative to allowing both positive and negative payments. Dropping the assumption about access to a quality oracle, we further show that negative payments can be used to make agents with quality lower than the quality threshold choose not to participate, while those above continue to participate and invest effort. Through the appropriate choice of payments, any design threshold can be achieved. This self-selection mechanism has the same expected cost as the cost-minimal quality-oracle mechanism, and thus when using the self-selection mechanism, perfect screening comes for free.

    2012

    • Kareem Amin, Michael Kearns, Peter Key, and Anton Schwaighofer, Budget Optimization for Sponsored Search: Censored Learning in MDPs, in Uncertainty in Artificial Intelligence (UAI), August 2012

      We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.
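
      The product-limit (Kaplan-Meier) ingredient can be sketched in isolation for this censored feedback: a win reveals the closing price exactly, while a loss reveals only that the price exceeded the bid. The sketch below is just the estimator; the paper embeds it inside an MDP policy, which is not attempted here.

```python
from collections import Counter

def kaplan_meier(samples):
    """Product-limit estimate of the survival function S(p) = P(price > p)
    from (value, observed) pairs, where observed=False means the true
    price exceeds `value` (a losing, right-censored bid).
    Returns (price, S(price)) steps at the observed prices."""
    events = Counter(v for v, obs in samples if obs)   # exactly observed prices
    counts = Counter(v for v, _ in samples)            # all observations
    at_risk, s, curve = len(samples), 1.0, []
    for v in sorted(counts):
        d = events.get(v, 0)
        if d and at_risk:
            s *= 1.0 - d / at_risk
            curve.append((v, s))
        at_risk -= counts[v]   # drop events and censored points at v
    return curve

# Wins reveal the closing price; losses only reveal price > bid.
obs = [(0.30, True), (0.20, False), (0.55, True), (0.40, False),
       (0.35, True), (0.60, False), (0.25, True)]
print(kaplan_meier(obs))
```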

    • Ramakrishna Gummadi, Peter Key, and Alexandre Proutiere, Optimal Bidding Strategies and Equilibria in Dynamic Auctions with Budget Constraints, 24 May 2012

      How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions: First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily expense constrained.

      Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We first provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.

    • Vineet Abhishek, Ian A. Kash, and Peter Key, Fixed and Market Pricing for Cloud Services, in NetEcon, March 2012 (full working paper version on arXiv: http://arxiv.org/abs/1201.5621)

      This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed and market-based pricing. We provide theoretical and simulation-based evidence suggesting that a fixed price generates a higher expected revenue than the hybrid system.

    • Reshef Meir, Moshe Tennenholtz, Yoram Bachrach, and Peter Key, Congestion Games with Agent Failures, in AAAI 2012, Association for the Advancement of Artificial Intelligence, 2012

      We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
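
      As a toy illustration of the perturbed game: with linear latencies and independent failures, expected costs again define a congestion game, so best-response dynamics reach a pure Nash equilibrium. The latency functions and failure model below are illustrative assumptions, not the paper's general setting.

```python
def expected_cost(a, others, survive):
    """Linear latency a*n. Each other agent on the resource participates
    independently with probability `survive`, so an active agent expects
    load 1 + others * survive."""
    return a * (1.0 + others * survive)

def best_response_dynamics(coeffs, n_agents, survive=0.9, rounds=100):
    """Iterate best responses on expected costs until no agent switches:
    a pure Nash equilibrium of the perturbed game (the paper proves one
    always exists, even though the perturbed game need not itself be a
    congestion game in general)."""
    choice = [0] * n_agents
    for _ in range(rounds):
        moved = False
        for i in range(n_agents):
            loads = [sum(1 for j, c in enumerate(choice) if c == r and j != i)
                     for r in range(len(coeffs))]
            best = min(range(len(coeffs)),
                       key=lambda r: expected_cost(coeffs[r], loads[r], survive))
            if best != choice[i]:
                choice[i], moved = best, True
        if not moved:
            break
    return choice

print(best_response_dynamics(coeffs=[1.0, 2.0], n_agents=6))  # 4/2 split
```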

    • Xi Alice Gao, Yoram Bachrach, Peter Key, and Thore Graepel, Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests, in AAAI 2012, Association for the Advancement of Artificial Intelligence, 2012

      We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal’s utility (i.e. the best solution’s quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal’s utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal’s utility.

    2011

    • Bozidar Radunovic, Alexandre Proutiere, Dinan Gunawardena, and Peter Key, Dynamic Channel, Rate Selection and Scheduling for White Spaces, in ACM CoNEXT, ACM, December 2011

      We investigate dynamic channel, rate selection and scheduling for wireless systems which exploit the large number of channels available in the White-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential benefit in throughput from tracking the best channel and also from optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
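
      As a hedged baseline for the learning problem above, the sketch below runs a plain sliding-window UCB over (channel, rate) arms, which copes with non-stationarity by forgetting old feedback; the paper's schemes additionally handle channel-switching costs and the correlation between rates on the same channel, which this baseline ignores.

```python
import math, random

def sw_ucb(channels, rates, success_prob, horizon=3000, window=200):
    """Sliding-window UCB over (channel, rate) arms for non-stationary
    channels. Reward is the rate on success, 0 on loss; success_prob
    supplies the drifting ground truth used only to simulate feedback."""
    arms = [(c, r) for c in channels for r in rates]
    history, total = [], 0.0            # (arm_index, reward) pairs
    for t in range(horizon):
        recent = history[-window:]
        def score(k):
            obs = [rew for a, rew in recent if a == k]
            if not obs:
                return float("inf")     # always re-explore unseen arms
            bonus = math.sqrt(2 * math.log(len(recent) + 1) / len(obs))
            return sum(obs) / len(obs) + max(rates) * bonus
        k = max(range(len(arms)), key=score)
        c, r = arms[k]
        reward = r if random.random() < success_prob(t, c, r) else 0.0
        history.append((k, reward))
        total += reward
    return total / horizon              # average throughput achieved

# Channel 0 degrades mid-run; higher rates fail more often.
def p(t, c, r):
    base = 0.9 if (c == 1 or t < 1500) else 0.3
    return base * (1.0 - 0.05 * r)

print(sw_ucb(channels=[0, 1], rates=[1, 2, 4, 8], success_prob=p))
```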

    • R. Gummadi, P. B. Key, and A. Proutiere, Optimal Bidding Strategies in Dynamic Auctions with Budget Constraints, in Forty-Ninth Annual Allerton Conference on Communication, Control, and Computing, IEEE, 30 September 2011

      We consider the problem of a bidder with limited budget competing in a series of second-price auctions. A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of repeated generalized second price auctions. To characterize the optimal bidding strategy, we formulate the problem as a discounted Markov Decision Process, and provide explicit solutions when the bidder is involved in a large number of auctions.

    • Peter Key and Furcy Pin, Stochastic Variability in Sponsored Search Auctions: Observations and Models, in EC11: 12th ACM Conference on Electronic Commerce, ACM, June 2011

      Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players’ valuations, all at a very low computational cost.

    • Nikhil Singh, Dinan Gunawardena, Alexandre Proutiere, Bozidar Radunovic, Horia Vlad Balan, and Peter Key, Efficient and Fair MAC for Wireless Networks with Self-interference Cancellation, in WiOpt 2011, May 2011

      Recent advances in PHY layer design have demonstrated efficient self-interference cancellation and full-duplex operation in a single band. Building a MAC that exploits self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don't interfere or (ii) allow for self-interference cancellation. Two issues arise: firstly, it is difficult to construct a schedule that fully exploits the potential for self-interference cancellation for arbitrary traffic patterns; secondly, designing an efficient and fair distributed MAC is a daunting task, and the issues become even more pronounced when scheduling under these constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used, obtaining both a throughput increase over current systems and a significant improvement in fairness.

    • Bozidar Radunovic, Alexandre Proutiere, Dinan Gunawardena, and Peter Key, Exploiting Channel Diversity in White Spaces, no. MSR-TR-2011-53, 1 April 2011

      In a recent ruling, the FCC has enabled unlicensed use of the vacated analogue broadcast TV spectrum, called "White Spaces", provided that the unlicensed devices detect and avoid primary users (remaining TV channels and wireless MICs). The offered frequency space is very wide and has excellent propagation characteristics; hence the ruling offers huge potential for developing high-quality, cheap and ubiquitous wireless access, and it also poses several novel design challenges (detecting primaries, network asymmetries, channel selection, etc.). In this work we focus on the problem of channel and rate selection for white spaces. We start by presenting measurements using an indoor testbed operating in the 500 to 600MHz band, and demonstrate that the key challenge is handling variable channel quality. The measurements show that there is significant potential benefit from selecting the best channel and adapting the transmission rate, while also revealing non-stationary channel characteristics. We propose learning algorithms that are well suited to non-stationary environments and that attempt to minimize "regret" with respect to a best or oracle-like algorithm. The novel aspects of the algorithms include (i) accounting for the cost incurred when switching channels and synchronization costs, and (ii) exploiting inherent correlations between the effective throughputs achieved when selecting the same channel but different transmission rates. We further extend the algorithms to account for fairness issues in network scenarios where several users are served by an access point.

    • Furcy Pin and Peter Key, Stochastic Variability in Sponsored Search Auctions: Observations and Models, no. MSR-TR-2011-40, 29 March 2011

      Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players’ valuations, all at a very low computational cost.

    • Peter Key, Laurent Massoulié, and Don Towsley, Path selection and multipath congestion control, in Communications of the ACM, ACM, January 2011

      In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as a function of all paths, and uncoordinated control where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N) where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare maximizing states, provided, in the latter case, that the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
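
      A minimal fluid sketch of the coordinated controller: every path's rate responds to the marginal utility of the session's total rate minus that path's congestion price, so traffic migrates to the cheaper paths. The log utility and linear path prices are illustrative assumptions; the paper's results on worst-case throughput, RTT bias and equilibria are not reproduced here.

```python
def coordinated_rates(prices, x, utility_grad, step=0.01, iters=5000):
    """Coordinated multipath control: each path rate moves with the
    marginal utility of the *total* session rate minus that path's own
    congestion price (projected to stay non-negative)."""
    for _ in range(iters):
        total = sum(x)
        for r in range(len(x)):
            x[r] = max(0.0, x[r] + step * (utility_grad(total) - prices[r](x[r])))
    return x

# log utility, U'(y) = 1/y; path prices rise linearly with their own load
prices = [lambda y: 0.5 * y, lambda y: 2.0 * y]
x = coordinated_rates(prices, [0.1, 0.1], lambda y: 1.0 / max(y, 1e-9))
print(["%.3f" % v for v in x])  # most traffic ends up on the cheaper path
```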

    2010

    • Bozidar Radunovic, Dinan Gunawardena, Peter Key, Alexandre Proutiere, Nikhil Singh, Vlad Balan, and Gerald Dejean, Rethinking Indoor Wireless Mesh Design: Low Power, Low Frequency, Full-duplex, in WiMesh (SECON Workshop), IEEE Communications Society, June 2010

      Existing indoor WiFi networks in the 2.5 GHz and 5 GHz bands use too much transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm, based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power – 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. It also yields more uniform connectivity, thus simplifying MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting (because we operate at low frequencies). It potentially doubles the throughput of each link and eliminates hidden terminals.

    • Marshini Chetty, Richard Banks, Richard Harper, Tim Regan, Abigail Sellen, Christos Gkantsidis, Thomas Karagiannis, and Peter Key, Who's Hogging the Bandwidth?: The Consequences Of Revealing The Invisible In the Home, in CHI 2010, Association for Computing Machinery, Inc., 10 April 2010

      As more technologies enter the home, householders are burdened with the task of digital housekeeping—managing and sharing digital resources like bandwidth. In response to this, we created a domestic tool for bandwidth management called Home Watcher. Our field trial of Home Watcher showed that when resource contention amongst different household members is made visible, people's understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real time resource usage in a home, and how this varies depending on the social make-up of the household.

    • Yoram Bachrach, Peter Key, and Morteza Zadimoghaddam, Collusion in VCG Path Procurement Auctions, in WINE, 2010

      We consider collusion in path procurement auctions, where payments are determined using the VCG mechanism. We show that collusion can increase the utility of the agents, and in some cases they can extract any amount the procurer is willing to offer. We show that computing how much a coalition can gain by colluding is NP-complete in general, but that in certain interesting restricted cases, the optimal collusion scheme can be computed in polynomial time. We examine the ways in which the colluders might share their payments, using the core and Shapley value from cooperative game theory. We show that in some cases the collusion game has an empty core, so although beneficial manipulations exist, the colluders would find it hard to form a stable coalition due to inability to decide how to split the rewards. On the other hand, we show that in several common restricted cases the collusion game is convex, so it has a non-empty core, which contains the Shapley value. We also show that in these cases colluders can compute core imputations and the Shapley value in polynomial time.
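
      For the small coalitions considered above, the Shapley value can be computed exactly by averaging marginal contributions over all orderings, as sketched below; the characteristic function is a toy, and the paper obtains polynomial-time computations for the convex restricted cases rather than this exponential enumeration.

```python
from itertools import permutations

def shapley(players, worth):
    """Exact Shapley value: average each player's marginal contribution
    worth(S + {p}) - worth(S) over all join orderings."""
    value = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = set()
        for p in order:
            value[p] += worth(seen | {p}) - worth(seen)
            seen.add(p)
    return {p: v / len(perms) for p, v in value.items()}

# Toy collusion game: a payment of 10 requires both edge owners 'a' and
# 'b' (say they jointly control a cheap path); 'c' adds 2 on its own.
def worth(S):
    return (10 if {"a", "b"} <= S else 0) + (2 if "c" in S else 0)

print(shapley(["a", "b", "c"], worth))  # a and b each get 5, c gets 2
```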

    2009

    • Christos Gkantsidis, Thomas Karagiannis, Peter Key, Bozidar Radunovic, Elias Raftopoulos, and D. Manjunath, Traffic management and resource allocation in small wired/wireless networks, in Fifth ACM International Conference on emerging Networking EXperiments and Technologies (CoNEXT 2009), Association for Computing Machinery, Inc., December 2009

      We consider the problem of traffic management in small networks with both wireless and wired devices, connected to the Internet through a single gateway. Examples of such networks are small office networks or residential networks, where typically traffic management is limited to flow prioritization through port-based filtering. We propose a practical resource allocation framework that provides simple mechanisms to applications and users to enable traffic management functionality currently not present due to the distributed nature of the system and various technology or protocol limitations. To allow for control irrespective of whether traffic flows cross wireless, wired or even broadband links, the proposed framework jointly optimizes rate allocations across wireless and wired devices in a weighted fair manner. Additionally, we propose a model for estimating the achievable capacity regions in wireless networks to overcome the absence of the queue information. This model is used by the controller to achieve a specific rate allocation. We evaluate a decentralized, host-based implementation of the proposed framework. The controller is incrementally deployable by not requiring modifications to existing network protocols and equipment or the wireless MAC. Using analytical methods and experimental results with realistic traffic, we show that our controller is stable with fast convergence for both UDP and TCP traffic, achieves weighted fairness, and mitigates scheduling inefficiencies of the existing hardware.

    • Peter Key and Alexandre Proutiere, Routing Games with Elastic Traffic, in Performance Evaluation Review, Association for Computing Machinery, Inc., September 2009

      In this paper, we introduce and investigate a novel class of multipath routing games with elastic traffic. Users open one or more connections along different feasible paths from source to destination and act selfishly, seeking to transfer data as fast as possible. Users only control their routing choices, and once these choices have been made, the connection rates are elastic and determined via congestion control algorithms (e.g. TCP) which ultimately maximize a certain notion of the network utility. We analyze the existence and the performance of the Nash Equilibria (NEs) of the resulting routing games.

    • Bozidar Radunovic, Dinan Gunawardena, Peter Key, Alexandre Proutiere, Nikhil Singh, Horia Vlad Balan, and Gerald Dejean, Rethinking Indoor Wireless: Low Power, Low Frequency, Full-duplex, no. MSR-TR-2009-148, 24 July 2009

      Existing indoor WiFi networks in the 2.5 GHz and 5 GHz bands use too much transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless design paradigm, based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power – 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. We also advocate full-duplex networking in a single band, which becomes possible in this setting (because we operate at low frequencies). It potentially doubles the throughput of each link and eliminates hidden terminals.

    • Gaurav Sharma, Ayalvadi Ganesh, and Peter Key, Performance Analysis of Contention Based Medium Access Control Protocols, in IEEE Trans. Information Theory, vol. 55, no. 4, pp. 1665 – 1682, IEEE, April 2009

      This paper studies the performance of contention based medium access control (MAC) protocols. In particular, a simple and accurate technique for estimating the throughput of the IEEE 802.11 DCF protocol is developed. The technique is based on a rigorous analysis of the Markov chain that corresponds to the time evolution of the back-off processes at the contending nodes. An extension of the technique is presented to handle the case where service differentiation is provided with the use of heterogeneous protocol parameters, as, for example, in IEEE 802.11e EDCA protocol. Our results provide new insights into the operation of such protocols. The techniques developed in the paper are applicable to a wide variety of contention based MAC protocols.
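
      For background only, the classical Bianchi-style fixed point for saturated 802.11 DCF is easy to compute, as below; the paper's contribution is a more rigorous Markov-chain analysis (and 802.11e EDCA extensions) than this standard approximation.

```python
def dcf_fixed_point(n, W=32, m=5, iters=500):
    """Bianchi-style fixed point for n saturated DCF nodes:
    tau = per-slot transmission probability, p = collision probability,
    with minimum contention window W and m backoff stages."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)) / ((1.0 - 2.0 * p) * (W + 1)
                                             + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * new_tau   # damped update for stability
    return tau, 1.0 - (1.0 - tau) ** (n - 1)

tau, p = dcf_fixed_point(n=10)
print("tau=%.4f, collision probability=%.4f" % (tau, p))
```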

    • Bozidar Radunovic, Dinan Gunawardena, Alexandre Proutiere, Nikhil Singh, Vlad Balan, and Peter Key, Efficiency and Fairness in Distributed Wireless Networks Through Self-interference Cancellation and Scheduling, no. MSR-TR-2009-27, 12 March 2009

      Handling interference is one of the major challenges in the design of multi-user distributed wireless systems. In current systems, interference is managed through carrier sensing mechanisms such as CSMA/CA and through MAC algorithms based on random back-off. However, the asymmetry in channel sensing inevitably causes degraded throughput and fairness issues, such as those caused by hidden terminal problems. We propose ContraFlow, a solution based on self-interference cancellation and innovative scheduling mechanisms that increases spatial reuse, eliminates hidden terminals, and rectifies decentralized coordination inefficiencies among nodes, thereby improving fairness. Self-interference cancellation is a technique allowing a node to cancel its own transmitted signal and hence to successfully receive data while transmitting on the same channel. We demonstrate the feasibility of such techniques in a low power WPAN setting, using Lyrtech software-defined radios. Self-interference cancellation repairs carrier sensing, making it possible to successfully eliminate hidden terminal problems, even when using current multi-user MAC protocols; but it also provides the opportunity to design new distributed MAC scheduling algorithms that increase the spatial reuse and solve most of the fairness problems associated with current algorithms. We use simulations to illustrate the performance gains achieved when ContraFlow is used, obtaining both a throughput increase over current systems and a significant improvement in fairness.

    • Bozidar Radunovic, Christos Gkantsidis, Peter Key, and Pablo Rodriguez, Toward Practical Opportunistic Routing with Intra-session Network Coding for Mesh Networks, in IEEE/ACM Transactions on Networking, 2009

      We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multi-path routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).
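
      A small sketch of the intra-session coding ingredient: forwarders send random linear combinations of a flow's packets, and receivers keep only "innovative" (rank-increasing) packets. GF(2), i.e. XOR coding, is used purely for brevity; practical designs typically code over GF(2^8). Function names are illustrative.

```python
import random

def random_combination(packets):
    """Random linear combination over GF(2): XOR a random non-empty subset
    of the packets; the coefficient vector travels in the header so any
    node can re-combine without per-packet coordination."""
    coeffs = [random.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[random.randrange(len(packets))] = 1
    payload = bytes(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, pkt))
    return coeffs, payload

def innovative(received, new):
    """True iff `new` increases the rank of the coefficient matrix
    (Gaussian elimination over GF(2); `received` holds the coefficient
    vectors of independent packets already buffered)."""
    rows = [r[:] for r in received] + [new[:]]
    rank = 0
    for col in range(len(new)):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank == len(rows)

pkts = [b"abcd", b"efgh", b"ijkl"]
c1, _ = random_combination(pkts)
print(c1, innovative([[1, 0, 0]], c1))
```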

    2008

    • Peter Key and Richard Gibbens, Coalition Games and Resource Allocation in Ad-Hoc Networks, in Bio-Inspired Computing and Communication (Biowire), vol. 5151/2008, pp. 387–398, Springer Verlag, Heidelberg, November 2008

      In this paper we explore some of the connections between cooperative game theory and the utility maximization framework for routing and flow control in networks. Central to both approaches are the allocation of scarce resources between the various users of a network and the importance of discovering distributed mechanisms that work well. The specific setting of our study is ad-hoc networks where a game-theoretic approach is particularly appealing. We discuss the underlying motivation for the primal and dual algorithms that assign routes and flows within the network and coordinate resource usage between the users. Important features of this study are the stochastic nature of the traffic pattern offered to the network and the use of a dynamic scheme to vary a user’s ability to send traffic. We briefly review coalition games defined by a characteristic function and the crucial notion of the Shapley value to allocate resources between players. We present a series of experiments with several test networks that illustrate how a distributed scheme of flow control and routing can in practice be aligned with the Shapley values which capture the influence or market power of individual users within the network.

    • Thomas Karagiannis, Christos Gkantsidis, Peter Key, Elias Athanasopoulos, and Elias Raftopoulos, HomeMaestro: Distributed monitoring and diagnosis of performance anomalies in home networks, no. MSR-TR-2008-161, 29 October 2008

      Home networks are comprised of applications running over multiple wired and wireless devices competing for shared network resources. Despite all the devices operating in a single administrative domain in such networks, applications operate independently, and users cannot express or enforce policies. By studying multiple households’ network performance at the packet-level correlated with diaries capturing user experiences, we show that the lack of cooperation across applications leads to observable performance problems and associated user frustration.

      We describe HomeMaestro, a cooperative host-based system that monitors local and global application performance, and automatically detects contention for network resources. HomeMaestro is designed to manage home and small networks and requires no modification to routers, access points, applications, or protocols. At each host, it transparently monitors per-flow and per-process network usage statistics, such as throughput, RTT, and loss rates. We propose novel

    • Bozidar Radunovic, Christos Gkantsidis, Dinan Gunawardena, and Peter Key, Horizon: Balancing TCP over Multiple Paths in Wireless Mesh Network, in MobiCom'08, Association for Computing Machinery, Inc., 14 September 2008

      There has been extensive work on network architectures that support multi-path routing to improve performance in wireless mesh networks. However, previous work uses ad-hoc design principles that cannot guarantee any network-wide performance objectives such as conjointly maximizing resource utilization and improving fairness. In parallel, numerous theoretical results have addressed the issue of optimizing a combined metric of network utilization and fairness using techniques based on back-pressure scheduling, routing and flow control. However, the proposed theoretical algorithms are extremely difficult to implement in practice, especially in the presence of the 802.11 MAC and TCP.

      We propose Horizon, a novel system design for multi-path forwarding in wireless meshes, based on the theoretical results on back-pressure. Our design works with an unmodified TCP stack and on top of the existing 802.11 MAC. We modified the back-pressure approach to obtain a simple 802.11-compatible packet-forwarding heuristic and a

    • Richard Mortier, Thomas Karagiannis, and Peter Key, Address and traffic dynamics in a large enterprise network, no. MSR-TR-2008-98, July 2008

      Despite the centrally-managed nature of and critical infrastructure provided by enterprise networks, analyses of their characteristics have been limited. In this paper we examine the dynamics of enterprise networks from two distinct perspectives, namely traffic and addressing. Using a large packet trace spanning approximately 3.5 weeks coupled with diverse other data sources, we pose and answer a series of questions pertinent to understanding the aforementioned aspects of today’s enterprise networks. Specifically, (i) What is the network and geographical spread of traffic observed at a site in the enterprise network? (ii) Is it possible to infer application usage based on port numbers alone? (iii) Is the client-server model valid within the enterprise, namely can we accurately distinguish clients and servers looking at traffic volumes alone? (iv) How reliable is host identification by IP address or name alone? (v) What are the mobility patterns for hosts within an enterprise network? Finally, we discuss the implications of our findings for tasks such as modelling and monitoring. Although no single enterprise network could be considered typical, we believe that even a single datum is better than none at all, and that insight into these characteristics of our network is of interest to the networking research community, for whom such data is rarely accessible.

    • Peter Key and Laurent Massoulié, Control of communication networks: welfare maximization and multipath transfers, in Philosophical Transactions of the Royal Society A, vol. 366, no. 1872, pp. 1955-1971, June 2008

      We discuss control strategies for communication networks such as the Internet. We advocate the goal of welfare maximization as a paradigm for network resource allocation. We explore the application of this paradigm to the case of parallel network paths. We show that welfare maximization requires active balancing across paths by data sources, and potentially requires implementation of novel transport protocols. However, the only requirement from the underlying 'network layer' is to expose the marginal congestion cost of network paths to the 'transport layer'. We further illustrate the versatility of the corresponding layered architecture by describing transport protocols with the following properties: they achieve welfare maximization when each communication may use an arbitrary collection of paths, available in an overlay, combined in series and parallel. We conclude by commenting on incentives, pricing and open problems.

    • Thomas Karagiannis, Elias Athanasopoulos, Christos Gkantsidis, and Peter Key, HomeMaestro: Order from Chaos in Home Networks, no. MSR-TR-2008-84, May 2008

      We present HomeMaestro, a distributed system for monitoring and instrumentation of home networks. HomeMaestro performs extensive measurements at the host level to infer application network requirements, and identifies network-related problems through time-series analysis. By sharing and correlating information across hosts in the home network, our system automatically detects and resolves contention over network resources among applications based on predefined policies. Finally, HomeMaestro implements a distributed virtual queue to enforce those policies by prioritizing applications without additional assistance from network equipment such as routers or access points. We outline the challenges in managing home networks, describe the design choices and architecture of our system, and highlight the performance of HomeMaestro components in typical home scenarios.

    • Peter Key, Laurent Massoulié, and Dan-Cristian Tomozei, Non-metric coordinates for predicting network proximity, in IEEE Infocom 2008 Minisymposium, IEEE, Phoenix, April 2008

      We consider the problem of determining the "closest", or best Internet host to connect to, from a list of candidate servers. Most existing approaches rely on the use of metric, or more specifically Euclidean coordinates to infer network proximity. This is problematic, given that network distances such as latency are known to violate the triangle inequality. This leads us to consider non-metric coordinate systems. We perform an empirical comparison between the "min-plus" non-metric coordinates and two metric coordinates, namely L-infinity and Euclidean. We observe that, when sufficiently many dimensions are used, min-plus outperforms metric coordinates for predicting Internet latencies. We also consider the prediction of "widest path capacity" between nodes. In this framework, we propose a generalization of min-plus coordinates. These results apply when node coordinates consist of measured network proximity to a random subset of landmark nodes. We perform empirical validation of these results on widest path bandwidth between PlanetLab nodes. We conclude that appropriate non-metric coordinates such as generalized min-plus systems are better suited than metric systems for representing the underlying structure of Internet distances, measured either via latencies or bandwidth.
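
      The min-plus prediction is simple enough to state in a few lines: take each node's measured latencies to a shared set of landmarks as its coordinates, and estimate the distance between two nodes as the best two-hop detour through a landmark; the widest-path generalization replaces (min, +) by (max, min). The landmark values below are made up for illustration.

```python
def min_plus_distance(lat_a, lat_b):
    """Latency estimate from a to b given each node's measured latencies
    to common landmarks: min_k (a_k + b_k), the best landmark detour.
    No triangle inequality is assumed, unlike Euclidean embeddings."""
    return min(a + b for a, b in zip(lat_a, lat_b))

def max_min_capacity(cap_a, cap_b):
    """Widest-path generalization: the bottleneck of the best landmark
    detour, max_k min(a_k, b_k)."""
    return max(min(a, b) for a, b in zip(cap_a, cap_b))

host_a = [12.0, 40.0, 25.0, 90.0]   # latencies (ms) to four landmarks
host_b = [55.0, 8.0, 30.0, 70.0]
print(min_plus_distance(host_a, host_b))              # 48.0, via landmark 2
print(max_min_capacity([100, 20, 55], [10, 80, 50]))  # 50, via landmark 3
```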

    • Bozidar Radunovic, Christos Gkantsidis, Peter Key, and Pablo Rodriguez, An Optimization Framework for Opportunistic Multipath Routing in Wireless Mesh Networks, in INFOCOM Minisymposium, 2008
    • H. Seferoglu (UC Irvine), A. Lakshmikantha (UIUC), A. Ganesh (University of Bristol), and Peter Key, Dynamic decentralized multi-channel MAC protocols, in Information Theory and Applications (ITA 08), Institute of Electrical and Electronics Engineers, Inc., UCSD, January 2008

      In this paper, we propose new dynamic decentralized multi-channel multiple access (MAC) protocols and study their performance. Our protocols build on slotted Aloha, but extend it in several ways to improve flow completion time and throughput, as follows: (i) channels are assigned to flows rather than packets to eliminate per packet collisions, thus the total number of collisions is reduced, and (ii) each flow owns or drops channels dynamically considering successful transmissions, thus the number of owned channels adapts to varying traffic. We present an analysis of the stability region and of flow completion times, for our algorithms, and show that one of them can achieve close to 100% throughput if flow sizes are large. We demonstrate by extensive simulations that, compared to current multi-channel MAC protocols, our algorithms improve flow completion time and throughput in wireless local area and mesh networks.
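
      A toy rendering of the channels-assigned-to-flows idea follows: owned channels carry uncontested traffic, while unowned channels are claimed through slotted-Aloha contention. The claim probability and the random release rule are illustrative assumptions; the paper's protocols drop channels based on transmission outcomes and come with a stability analysis that this sketch omits.

```python
import random

def simulate(n_flows, n_channels, slots=2000, p_claim=0.2, seed=0):
    """Toy dynamic multi-channel Aloha: channels are owned by flows, not
    packets. Owners transmit uncontested; unowned channels are claimed by
    contention, and a collision means the channel stays unowned."""
    rng = random.Random(seed)
    owner = [None] * n_channels
    success = [0] * n_flows
    for _ in range(slots):
        for ch in range(n_channels):
            if owner[ch] is not None:
                success[owner[ch]] += 1       # collision-free slot
                continue
            claimants = [f for f in range(n_flows) if rng.random() < p_claim]
            if len(claimants) == 1:           # a lone claimant wins ownership
                owner[ch] = claimants[0]
                success[claimants[0]] += 1
        for ch in range(n_channels):          # flows finish and release
            if owner[ch] is not None and rng.random() < 0.01:
                owner[ch] = None
    return success

print(simulate(n_flows=4, n_channels=3))      # per-flow successful slots
```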

    2007

    • Christos Gkantsidis, Wenjun Hu, Peter Key, Bozidar Radunovic, Pablo Rodriguez, and Steluta Gheorghiu, Multipath Code Casting for Wireless Mesh Networks, in CoNext 2007, ACM, New York, December 2007

      Designing high throughput wireless mesh networks involves solving interrelated scheduling, routing, and interference problems. In this paper, we exploit the broadcast properties and the path diversity of wireless meshes to implement an efficient multipath routing protocol, Multipath Code Casting (MC2). In contrast to prior work in opportunistic routing, which required strong coordination across nodes to prevent information repetition, our design is based on network coding and does not require node coordination. Moreover, it provides a unified framework to deal with data transmissions across multiple and, often, unreliable transmission paths. Our design also includes a novel rate-scheduling algorithm that guarantees (proportionally) fair allocation of resources across multiple (multipath) flows, ensures that data use the paths with the best performance, and prevents information overflow by controlling the data rate across each path. Using simulations and a prototype implementation, we show that our algorithms provide over 30% performance improvement compared to traditional single-path approaches when applied to realistic and other exemplar topologies; in some scenarios, our approach can even double the throughput. Our approach also performs more than 20% better than other multipath routing schemes.

    • Christos Gkantsidis, Bozidar Radunovic, Peter Key, Steluta Gheorghiu, Wenjun Hu, and Pablo Rodriguez, Multipath Code Casting for Wireless Mesh Networks, no. MSR-TR-2007-67, June 2007

      Designing high throughput wireless mesh networks is a challenge, and involves solving interrelated scheduling, routing, and interference problems. In this paper, we exploit the fundamental properties of broadcast medium and path diversity in wireless meshes to implement multipath routing between a source and destination pair. We use network coding for a given unicast source-destination flow to ease the scheduling problem, exploit diversity, and deal with unreliable transmissions. We describe multipath-forwarding algorithms, and show their performance benefits over existing proposals, using simulation, analysis, and a prototype implementation on a small testbed. We propose a rate-scheduling protocol that relies on network coding, which gives over 30% performance improvement for a realistic topology and can double the throughput in certain cases.

    • Bozidar Radunovic, Christos Gkantsidis, Peter Key, Pablo Rodriguez, and Wenjun Hu, An optimization framework for practical multipath routing in wireless mesh networks, no. MSR-TR-2007-81, June 2007

      We consider wireless mesh networks, and exploit the inherent broadcast nature of wireless by making use of multipath routing. We present an optimization framework that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use random scheduling rather than optimal for comparisons. Under random scheduling, our protocol becomes fully decentralised. We use simulation to show on realistic topologies that we can achieve 20-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).

    • Peter Key, Laurent Massoulié, and Don Towsley, Path Selection and Multipath Congestion Control, in Proc. IEEE Infocom 2007, IEEE, May 2007

      In this paper we investigate the potential benefits of coordinated congestion control for multipath data transfers, and contrast with uncoordinated control. For static random path selections, we show the worst-case throughput performance of uncoordinated control behaves as if each user had but a single path (scaling like log(log(N))/log(N) where N is the system size, measured in number of resources), whereas coordinated control gives a throughput allocation bounded away from zero, improving on both uncoordinated control and on the greedy least-loaded path selection of e.g. Mitzenmacher. We then allow users to change their set of routes and introduce the notion of a Nash equilibrium. We show that with RTT bias (as in TCP Reno), uncoordinated control can lead to inefficient equilibria. With no RTT bias, both uncoordinated and coordinated Nash equilibria correspond to desirable welfare maximising states. Moreover, simple path reselection policies that shift to paths with higher net benefit can find these states.

    • Peter Key, Laurent Massoulié, and Don Towsley, Multipath Routing, Congestion Control and Load Balancing, in ICASSP 2007, April 2007

      Combining transport-layer congestion control with multi-path routing is a cross-layer approach that provides performance benefits over treating the layers separately. We phrase this as an optimisation problem, examine the case of data transfers, and show how a coordinated controller gives strictly better performance than an uncoordinated controller, which sets up parallel paths. For fixed demands, and the case of random-path selection, we show how coordinated control also achieves better load balancing than greedy least-loaded path selection. We then comment on adaptive path selection.

    2006

    • Peter Key and Laurent Massoulié, Fluid models of integrated traffic and multipath routing, in Queueing Systems, vol. 53, no. 1, pp. 85–98, June 2006

      In this paper we consider a stochastic model describing the varying number of flows in a network. This model features flows of two types, namely file transfers (with fixed volume) and streaming traffic (with fixed duration), and extends the model of Key, Massoulié, Bain and Kelly [27] by allowing more general bandwidth allocation criteria. We analyse the dynamics of the system under a fluid scaling, and show Lyapunov stability of the fluid limits under a natural stability condition. We provide natural interpretations of the fixed points of these fluid limits. We then compare the fluid dynamics of file transfers under (i) balanced multipath routing and (ii) parallel, uncoordinated routing. We show that for identical traffic demands, parallel uncoordinated routing can be unstable while balanced multipath routing is stable. Finally, we identify multi-dimensional Ornstein-Uhlenbeck processes as second-order approximations to the first-order fluid limit dynamics.

    • A.J. Ganesh, P.B. Key, D. Polis, and R. Srikant, Congestion Notification and Probing Mechanisms for Endpoint Admission Control, in IEEE/ACM Transactions on Networking, vol. 14, no. 3, pp. 568–578, June 2006

      There has been much interest in admission control schemes that place the burden of admission control decisions on the end users. In these schemes, referred to as Endpoint Admission Control, the decision to join the network is taken by the user, based on the probing of the network using probe packets. Depending on the level of congestion, routers mark the probe packets and thus inform the user of the state of the network. In this paper, we analyze three mechanisms for providing Endpoint Admission Control: virtual-queue marking, random-early marking and tail drop. For each scheme, we analyze the probing duration necessary to guarantee the required QoS and achieve high link utilization. Our main conclusion is that very few probe packets have to be sent when early marking is used, whereas tail drop requires a large number of probe packets.
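
      The probing decision itself is simple to picture (the probe count and threshold below are hypothetical, not values from the analysis): send some probe packets, observe the fraction marked, and admit only if that fraction is small.

        import random

        # Endpoint admission by probing: admit if few probes come back marked.
        def admit(mark_prob, n_probes=20, threshold=0.1, rng=random):
            marked = sum(rng.random() < mark_prob for _ in range(n_probes))
            return marked / n_probes <= threshold

        # A lightly marked path (2%) is usually admitted; a heavily
        # marked one (40%) is usually refused.
        print(admit(0.02), admit(0.40))

      The paper's question is then how long such probing must last for the decision to guarantee the required QoS while keeping utilization high.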

    • A. Ganesh, D. Gunawardena, P. Key, L. Massoulié, and J. Scott, Efficient quarantining of scanning worms: optimal detection and coordination, in Infocom 2006, IEEE, April 2006

      Current generation worms have caused considerable damage, despite their use of unsophisticated scanning strategies for detecting vulnerable hosts. A number of adaptive techniques have been proposed for quarantining hosts whose behaviour is deemed suspicious. Such techniques have been proven to be effective against fast scanning worms. However, worms could evade detection by being less aggressive. In this paper we consider the interplay between worm strategies and detection techniques, which can be described in game-theoretic terms. We use epidemiological modelling to characterise the outcome of the game (the pay-off function), as a function of the strategies of the worm and the detector. We design detection rules that are optimal against scanning worms with known characteristics. We then identify specific detection rules that are close to optimal, in some mathematically precise sense, against any scanning worm. Finally, we design methods for coordinating information among a set of end-hosts, using Bayesian decision theory. We evaluate the proposed rules using simulations driven by traces from a corporate environment of 600 hosts, and assess the benefits of coordination.
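
      The detector side of the game can be pictured with a toy rate-based rule (the window and threshold are hypothetical; the paper derives optimal and near-optimal rules rather than this fixed one): quarantine a host once it contacts too many previously unseen destinations within a time window.

        from collections import deque

        # Toy scanning detector: flag a host that initiates contact with
        # more than max_new previously unseen destinations per window.
        class ScanDetector:
            def __init__(self, window=60.0, max_new=20):
                self.window, self.max_new = window, max_new
                self.first_contacts = deque()      # (time, dest) pairs
                self.seen = set()

            def observe(self, t, dest):
                if dest in self.seen:
                    return False                   # repeat contact: no evidence
                self.seen.add(dest)
                self.first_contacts.append((t, dest))
                while self.first_contacts and self.first_contacts[0][0] < t - self.window:
                    self.first_contacts.popleft()
                return len(self.first_contacts) > self.max_new  # True => quarantine

      A worm can evade such a fixed rule by scanning just below its threshold, which is exactly the strategic interplay the paper formalises.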

    • Gaurav Sharma, Ayalvaid Ganesh, and Peter Key, Performance Analysis of Contention Based Medium Access Control Protocols, in Infocom 2006, IEEE, April 2006
    • Laurent Massoulié and Peter Key, Schedulable regions and equilibrium cost for multipath flow control: the benefits of coordination, in CISS 2006, 40th Conference on Information Sciences and Systems, Institute of Electrical and Electronics Engineers, Inc., March 2006

      In this paper we consider deterministic differential equation models for the varying number of flows in a network. These arise naturally as limits of stochastic models under joint scaling of flow arrival rates and network capacities. We compare these dynamics under (i) coordinated multipath routing and (ii) parallel, uncoordinated routing. We show that for identical traffic demands, parallel uncoordinated routing can be unstable while balanced multipath routing is stable. In other words, coordination can strictly increase the schedulable region, that is the set of demand vectors for which the system is stable. We also show that, even when uncoordinated multipath routing stabilises the system, coordination can bring further benefits.

    • Peter Key, Laurent Massoulié, and Don Towsley, Combining Multipath Routing and Congestion Control for Robustness, in CISS 2006, 40th Conference on Information Sciences and Systems, Institute of Electrical and Electronics Engineers, Inc., March 2006

      Flexible routing schemes mitigate some of the problems associated with uncertain traffic patterns and workloads by making the exact location of capacity less important: if there is available capacity the routing scheme will find it. In this paper we propose a combined multipath routing and congestion control architecture that can provide performance improvements to the end user and simplify network dimensioning for operators. We describe a flow-level model, able to handle streaming and file transfer traffic, with stochastic arrivals, and look at a fluid limit. We describe a congestion controller and path selection algorithm that automatically balances traffic across the lowest cost paths, and we suggest ways in which just two paths may be used, with a random selection policy. A notable feature of a multipath congestion controller is that it cannot be tuned to a single RTT, hence it differs from standard TCP with respect to RTT bias.

    2005

    • Peter Key, Laurent Massoulié, and Don Towsley, Combined Multipath Routing and Congestion Control: a Robust Internet Architecture, no. MSR-TR-2005-111, August 2005

      Network management is complicated by uncertain traffic patterns and workloads. Flexible routing schemes mitigate some of the problems by making the exact location of capacity less important: if there is available capacity the routing scheme will find it. In this paper we propose a combined multipath routing and congestion control architecture that gives performance improvements to the end user and simplifies network dimensioning. We advocate multihoming and stepping stone routers to provide path diversity, and a congestion controller and path selection algorithm that automatically balances traffic across the lowest cost paths. A notable feature of a multipath congestion controller is that it cannot be tuned to a single RTT, hence it differs from standard TCP with respect to RTT bias. Scalability of the architecture results from implementing the algorithms at end-systems. We illustrate on network topologies of interest the performance impact of our architecture: active use of two paths can (i) halve response times and (ii) double the load that a network can carry.
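
      Random path reselection of this flavour can be sketched in a few lines (path names and costs are hypothetical): keep the cheaper of the current path and one probed at random, so traffic drifts onto the lowest-cost paths.

        import random

        # Keep the current path unless a randomly probed alternative is cheaper.
        def reselect(path_costs, current, rng=random):
            candidate = rng.choice(list(path_costs))
            return candidate if path_costs[candidate] < path_costs[current] else current

        costs = {"p1": 0.5, "p2": 0.2, "p3": 0.9}
        path = "p1"
        for _ in range(10):
            path = reselect(costs, path)
        print(path)   # settles on the lowest-cost path, here p2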

    • Peter Key and Laurent Massoulié, Fluid Limits and Diffusion Approximations for Integrated Traffic Models, no. MSR-TR-2005-83, June 2005

      In this paper we consider a stochastic model describing the varying number of flows in a network. This model features flows of two types, namely file transfers (with fixed volume) and streaming traffic (with fixed duration), and extends the model of Key, Massoulié, Bain and Kelly by allowing more general bandwidth allocation criteria. We analyse the dynamics of the system under a fluid scaling, and show Lyapunov stability of the fluid limits under a natural stability condition. We provide natural interpretations of the fixed points of these fluid limits. We then compare the fluid dynamics of file transfers under (i) balanced multipath routing and (ii) parallel, uncoordinated routing. We show that for identical traffic demands, parallel uncoordinated routing can be unstable while balanced multipath routing is stable. Finally, we identify multi-dimensional Ornstein-Uhlenbeck processes as second-order approximations to the first-order fluid limit dynamics.

    • Supratim Deb, Ayalvadi Ganesh, and Peter Key, Resource allocation between persistent and transient flows, in IEEE/ACM Transactions on Networking, vol. 13, no. 2, pp. 302–315, April 2005

      The flow control algorithms currently used in the Internet have been tailored to share available capacity between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. However, real-time applications typically have very different requirements from file transfer or Web browsing, and treating them identically can result in a perception of poor quality of service even when adequate bandwidth is available. This is the motivation for differentiated services. In this paper, we explore service differentiation between persistent (fixed duration) and transient (fixed volume) flows, and also between transient flows of markedly different sizes; the latter is stimulated by current discussion on Web mice and elephants. We propose decentralized bandwidth allocation algorithms that can be implemented by end-systems without requiring the support of a complex network architecture, and show that they achieve performance very close to what is achievable by the optimal centralized scheme.
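
      The flavour of the allocation can be sketched as weighted processor sharing (the weights are hypothetical; the paper's algorithms are decentralised end-system rules rather than this centralised split), where giving higher weight to flows with little remaining work approximates shortest-remaining-processing-time scheduling:

        # Weighted processor sharing: capacity split in proportion to weights.
        def wps_shares(weights, capacity=1.0):
            total = sum(weights.values())
            return {flow: capacity * w / total for flow, w in weights.items()}

        # Hypothetical weights favouring a short transfer over a persistent flow.
        print(wps_shares({"short_transfer": 4.0, "persistent": 1.0}))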

    2004

    • Manuel Costa, Miguel Castro, Antony Rowstron, and Peter Key, PIC: Practical Internet Coordinates for Distance Estimation, in Proceedings of the 24th International Conference on Distributed Computing Systems (ICDCS'04), March 2004

      This paper introduces PIC, a practical coordinate-based mechanism to estimate Internet network distance (i.e., round-trip delay or network hops). Network distance estimation is important in many applications, for example, network-aware overlay construction and server selection. There are several proposals for distance estimation in the Internet but they all suffer from problems that limit their benefit. Most rely on a small set of infrastructure nodes that are a single point of failure and limit scalability. Others use sets of peers to compute coordinates but these coordinates can be arbitrarily wrong if one of these peers is malicious. While it may be reasonable to secure a small set of infrastructure nodes, it is unreasonable to secure all peers. PIC addresses these problems: it does not rely on infrastructure nodes and it can compute accurate coordinates even when some peers are malicious. We present PIC’s design, experimental evaluation, and an application to network-aware overlay construction and maintenance.
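
      The coordinate computation can be sketched as follows (PIC minimises a similar prediction error with the Simplex downhill method and adds security tests; the plain gradient descent and the sample values here are ours): place the joining node so that Euclidean distances to landmark coordinates match the measured delays.

        import random

        # Fit a node's coordinates to measured distances from landmarks.
        def localise(landmarks, dists, dim=2, steps=2000, step=0.05, rng=random):
            pos = [rng.random() for _ in range(dim)]
            for _ in range(steps):
                for lm, d in zip(landmarks, dists):
                    est = sum((p - q) ** 2 for p, q in zip(pos, lm)) ** 0.5 or 1e-9
                    err = est - d                   # signed prediction error
                    for i in range(dim):            # descend the squared error
                        pos[i] -= step * err * (pos[i] - lm[i]) / est
            return pos

        # A node roughly equidistant from three landmarks.
        print(localise([(0, 0), (10, 0), (0, 10)], [7.1, 7.0, 7.2]))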

    • P. Key, L. Massoulié, and B. Wang, Emulating Low-Priority Transport at the Application Layer: a Background Transfer Service, in ACM Sigmetrics 04, ACM, January 2004

      Low priority data transfer across the wide area is useful in several contexts, for example for the dissemination of large files such as OS updates, content distribution or prefetching. Although the design of such a service is reasonably easy when the underlying network supports service differentiation, it becomes more challenging without such network support. We describe an application level approach to designing a low priority service – one that is 'lower than best-effort' in the context of the current Internet. We require neither network support nor changes to TCP. Instead, we use a receive window control to limit the transfer rate of the application, and the optimal rate is determined by detecting a change-point. We motivate this joint control-estimation problem by considering a fluid-based optimisation framework, and describe practical solutions, based on stochastic approximation and binary search techniques. Simulation results demonstrate the effectiveness of the approach.
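
      Two ingredients of the design can be sketched directly (the rates, window sizes, and RTT test below are hypothetical): a receive window that caps the sender near a target rate, and a binary search for the largest rate that does not yet inflate the round-trip time, i.e. the change-point.

        # A receive window of W bytes limits the sender to roughly W/RTT.
        def receive_window(target_rate_bps, rtt_s, mss=1460):
            return max(mss, int(target_rate_bps * rtt_s / 8))

        # Binary search for the change-point: the largest sending rate at
        # which the measured round-trip time is still uninflated.
        def probe_rate(low, high, rtt_inflates, tol=1e4):
            while high - low > tol:
                mid = (low + high) / 2
                if rtt_inflates(mid):
                    high = mid
                else:
                    low = mid
            return low

        print(receive_window(1e6, 0.1))               # about 12500 bytes
        print(probe_rate(0, 1e7, lambda r: r > 3e6))  # about 3e6 bps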

    • P. Key, A. Bain, F. Kelly, and L. Massoulié, Fair Internet Traffic Integration: Network Flow Models and Analysis, in Annals of Telecommunications, January 2004

    2003

    • Peter Key, Laurent Massoulié, and Bing Wang, Network Aware Applications: A Background Transfer Service, in Proceedings of Forty-First Allerton Conference on Communication, Control, and Computing, October 2003
    • P. Key, L. Massoulié, A. Bain, and F. Kelly, A network flow model for mixtures of file transfers and streaming traffic, in 18th International Teletraffic Conference (ITC), Elsevier, June 2003

      Roberts, Massoulié and co-authors have introduced and studied a flow level model of Internet congestion control, that represents the randomly varying number of flows present in a network where bandwidth is dynamically shared between elastic file transfers. In this paper we consider a generalization of the model to include streaming traffic as well as file transfers, under a fairness assumption that includes TCP-friendliness as a special case. We establish stability, under conditions, for a fluid model of the system. We also assess the impact of each traffic type on the other: file transfers are seen by streaming traffic as reducing the available capacity, whereas for file transfers the presence of streaming traffic amounts to replacing sharp capacity constraints by relaxed constraints. The integration of streaming traffic and file transfers has a stabilizing effect on the variability of the number of flows present in the system.

    • Dinan Gunawardena, Peter Key, and Laurent Massoulié, Network characteristics: modelling, measurements and admission control, in IWQoS, June 2003
    • P. Key and L. Massoulié, Probing Strategies for Distributed Admission Control in Large and Small Scale Systems, in IEEE INFOCOM, IEEE, April 2003

      The aim of this article is to propose and analyse measurement-based admission control schemes. We distinguish between large-scale and small-scale systems, where scale is measured in the number of concurrent applications that can run simultaneously. For large scale systems, we show that simple end-user probing strategies, based on ECN-type feedback provided by the network, achieve a good utilisation/quality trade-off. We explicitly take account of feedback delay, and use limiting results for assessing performance. We illustrate the benefits of using ECN-type feedback rather than relying on loss. For small-scale systems, the previous strategies are no longer adequate and we propose alternative, more gradual probing strategies.

    2002

    • Peter Key, Laurent Massoulié, and Jonathan Shapiro, Service-Differentiation for Delay-Sensitive Applications: An Optimisation-Based Approach, in Performance Evaluation, vol. 49, pp. 471–489, Elsevier, September 2002

      This paper deals with the performance of delay-sensitive applications running over a network that offers multiple classes of service, where the adaptation of application rates in response to network feedback is the primary mechanism available for controlling quality of service. We first evaluate the gain in utilisation allowed by the introduction of several classes of service. To this end we compare the pairs of achievable rates, or schedulable regions, for two types of applications with two distinct delay requirements that make use of a single resource, with either no differentiation, simple priority-based differentiation, or earliest-deadline-first (EDF) scheduling-based differentiation. The main observations are that the gain achieved by differentiation is essentially affected by traffic burstiness, and that the two differentiation schemes yield very similar performance. We then consider what feedback information should be sent to traffic sources from different classes, casting the problem in the framework of optimisation-based congestion control. We establish a connection between the sample-path shadow price rationale for feedback synthesis and the rare perturbation analysis technique for gradient estimation in discrete event systems theory. Based on this connection, we propose several marking schemes, for simple priority-based differentiation with a measure of cost based on loss or delay, and also for EDF-based differentiation with loss-based cost. The interaction of these marking algorithms with simple congestion control algorithms is studied via simulations.

    • J. Shapiro, P. Key, and L. Massoulié, Service-Differentiation for Delay-Sensitive Applications: An Optimisation-Based Approach, in IFIP Performance 2002, January 2002
    • Alan Bain and Peter Key, Modelling the Performance of In-Call Probing for Multi-Level Adaptive Applications, no. MSR-TR-2002-06, January 2002

      This paper looks at adaptive applications that can switch between a small number of different levels of service, with switching decisions made solely by the originating end-system. Typical of such applications are real time streaming protocols which can use different coding rates. The end-systems probe the network with sample traffic to determine congestion, and decide at what rate to enter according to the fate of their "probe" packets. During the lifetime of a connection, the procedure is repeated to reassess and possibly readjust the rate. We derive analytic models, based on diffusion limits under a natural scaling, to quantify the benefits of in-call probing. We then use simulation to compare the results in a number of scenarios, and show that this simple theory is remarkably accurate in predicting large-scale behaviour. The results also show that a small amount of in-call probing produces significant benefits to the system.

    • S. Deb, A. Ganesh, and P. Key, Resource allocation with persistent and transient flows, in Networking 2002, Springer, January 2002

      The flow control algorithms currently used in the Internet have been tailored to share bandwidth between users on the basis of the physical characteristics of the network links they use rather than the characteristics of their applications. This can result in a perception of poor quality of service by some users even when adequate bandwidth is potentially available, and is the motivation for seeking to provide differentiated services. In this paper, stimulated by current discussion on Web mice and elephants, we explore service differentiation between persistent and short-lived flows, and between file transfers of different sizes. In particular, we seek to achieve this using decentralized algorithms that can be implemented by end-systems without requiring the support of a complex network architecture. The algorithms we propose correspond to a form of weighted processor sharing and can be tailored to approximate the shortest remaining processing time service discipline.

    2001

    • A. Bain and P. B. Key, Modelling the Performance of Distributed Admission Control for Adaptive Applications, in ACM SIGMETRICS Performance Evaluation Review, vol. 29, no. 3, pp. 21–22, ACM Press, December 2001
    • Peter Key, L. Massoulie, and Jonathan Shapiro, Service Differentiation for Delay-Sensitive Applications: An Optimization-Based Approach, no. MSR-TR-2001-115, November 2001

      This paper deals with the performance of delay-sensitive applications running over a network that offers multiple classes of service. We first discuss the gain in utilisation allowed by the introduction of several classes of service. To this end we compare the pairs of achievable rates, or schedulable regions, for two types of applications with two distinct delay requirements that make use of a single resource, with either no differentiation, simple priority-based differentiation, or earliest-deadline-first scheduling-based differentiation. The main observations are that the gain achieved by differentiation is essentially affected by traffic burstiness, and that the two differentiation schemes yield very similar performance. We then discuss what feedback information should be sent to traffic sources from different classes, casting the problem in the framework of optimisation-based congestion control. We establish a connection between the sample-path shadow price rationale for feedback synthesis and the rare perturbation analysis technique for gradient estimation in discrete event systems theory. Based on this connection, we propose several marking schemes, for simple priority-based differentiation with a measure of cost based on loss or delay, and also for earliest-deadline-first-based differentiation with loss-based cost. The interaction of these marking algorithms with simple congestion control algorithms is studied via simulations.

    • P. Kuusela, P. Lassila, J. Virtamo, and P. Key, Modeling RED with Idealized TCP Sources, in Proceedings of IFIP ATM & IP 2001, June 2001
    • Richard Gibbens and Peter Key, Distributed control and resource marking using best-effort routers, in IEEE Network, vol. 15, no. 3, pp. 54–59, Institute of Electrical and Electronics Engineers, Inc., June 2001

      We present a method for creating differential QoS where control is in the hands of the end system or user, and the network distributes congestion feedback information to users via packet marking at resources. Current proposals for creating differential QoS in the Internet often rely on classifying packets into a number of classes with routers treating different classes appropriately, so that the router plays a critical role in guaranteeing performance. In contrast, there is a growing body of work that seeks to place more of the control in the hands of the end system or user, with simple functionality in the router. This is the approach outlined in this tutorial article: using insights from economics and control theory, we show how cooperation between end systems and the network can be encouraged with a simple packet marking scheme, with users reacting to the feedback to obtain differential QoS.
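
      A toy version of the virtual-queue marking idea used in this line of work (the drain fraction and queue limit are hypothetical): run a fictitious queue drained at a fraction theta of the true capacity and mark packets whenever it overflows, so marks appear before the real queue congests.

        # Mark packets against a virtual queue served at theta * capacity.
        class VirtualQueueMarker:
            def __init__(self, capacity, theta=0.9, vq_limit=20.0):
                self.rate, self.limit = theta * capacity, vq_limit
                self.vq, self.last_t = 0.0, 0.0

            def on_packet(self, t, size=1.0):
                # drain since the last packet arrival, then add this packet
                self.vq = max(0.0, self.vq - self.rate * (t - self.last_t))
                self.last_t = t
                self.vq += size
                return self.vq > self.limit    # True => mark this packet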

    • Alan Bain and Peter B. Key, In-call Probing and End-to-End Congestion Control: Theory and Performance, February 2001
    • A. Ganesh, P. Key, and L. Massoulié, Feedback and bandwidth sharing in networks, in Proc. 39th Annual Allerton Conference on Communications, Control and Computing, January 2001
    • Peter B. Key, Resource Pricing for Differentiated Services, in Kommunikation in Verteilten Systemen (KiVS), Springer, January 2001
    • Richard J. Gibbens, Peter B. Key, and Stephen R. E. Turner, Properties of the Virtual Queue Marking Algorithm, in 17th UK Teletraffic Symposium, IEE, January 2001

    2000

    • F. P. Kelly, P. B. Key, and S. Zachary, Distributed Admission Control, in IEEE Journal on Selected Areas in Communications, vol. 18, no. 12, December 2000

      This paper describes a framework for admission control for a packet-based network where the decisions are taken by edge devices or end-systems, rather than resources within the network. The decisions are based on the results of probe packets that the end-systems send through the network, and require only that resources apply a mark to packets in a way that is load dependent. One application example is the Internet, where marking information is fed back via an ECN bit, and we show how this approach allows a rich QoS framework for flows or streams. Our approach allows networks to be explicitly analyzed, and consequently engineered.

    • Koenraad Laevens, Peter B. Key, and Derek McAuley, An ECN-based end-to-end congestion-control framework: experiments and evaluation, no. MSR-TR-2000-104, October 2000

      In this paper we describe a study of an end-to-end architecture in a packet network where congestion is signalled to all contributing users, who then react accordingly. We assume a generic form of packet marking, where congested nodes set a single bit in the packets flowing through them, using ECN (Explicit Congestion Notification). We briefly describe how allowing greater freedom to the end-systems lays the foundation for a differential QoS framework as a possible future extension to ECN. Not surprisingly, this QoS is a function of the user-network interaction. Focussing on a specific class of user behaviours, related to streaming applications, we decompose this interaction at the relevant timescales, relying both on simulations and dynamical system models. Results indicate that within the framework, single-bit notification achieves end-to-end QoS differentiation, approaching proportionally fair sharing of bandwidth.

    • Laurent Massoulié, Peter B. Key, and Koenraad Laevens, End-User Policies for Predicting Congestion Patterns in Data Networks, in 13th ITC Specialist Seminar on Internet Traffic Measurement, September 2000

    1999

    • R. J. Gibbens and P. B. Key, The use of games to assess user strategies for differential Quality of Service in the Internet, in Workshop on Internet Service Quality Economics, December 1999

    • P. B. Key and L. Massoulié, User Policies in a Network Implementing Congestion Pricing, in Workshop on Internet Service Quality Economics, December 1999

      In this paper we consider a network implementing congestion pricing. For general user utility functions, and in a regime where user peak rates are small compared to network link capacities, we establish optimality of the considered pricing scheme. The corresponding optimal user policies are perhaps contrary to what one would expect: file transfers must either be done at maximal speed, or have access denied, whereas real time applications will display elasticity in their choice of sending rates. We also discuss how the optimality property is affected by price encoding mechanisms implemented in the network, and the resulting effect on user policies.
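
      A one-line calculation illustrates the all-or-nothing policy for file transfers (our notation, not the paper's derivation): with a congestion charge of p per unit volume, a transfer of volume V and completion value v has net benefit

        \[
        v - p\,V,
        \]

      independent of the chosen sending rate, so the optimum is to send at peak rate when v > pV and to abstain otherwise; a streaming user with concave rate utility U instead solves max_x U(x) - p x and elastically selects the rate with U'(x) = p.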

    • P. B. Key and D. R. McAuley, Differential QoS and pricing in networks: where flow control meets game theory, in IEE Proceedings Software, vol. 146, no. 2, pp. 39–43, Institute of Electrical and Electronics Engineers, Inc., March 1999

      This paper looks at ways of providing quality of service (QoS) to users, based on a simple pricing scheme. It is primarily aimed at elastic traffic, and it is users rather than the network who define the flow control schemes. A framework for assessing schemes and algorithms via a distributed game is presented.

    • Peter B. Key, Derek R. McAuley, Paul Barham, and Koenraad Laevens, Congestion Pricing for Congestion Avoidance, no. MSR-TR-99-15, January 1999

      This paper describes the use of Congestion Pricing as a means of providing Congestion Control and Differentiated Quality of Service. The application of the proposed technique to the Internet Protocol has the advantage that it can be simply implemented using Explicit Congestion Notification.

    • R.J Gibbens and P.B. Key, A note on resource pricing and congestion control for networks with delay and loss, January 1999
    • Peter B. Key, Service Differentiation: Congestion Pricing, Brokers and Bandwidth Futures, in NOSSDAV'99, January 1999

      In most Networks or Systems, users compete for a set of limited resources. Resources might be point-to-point bandwidth, buffer space, memory, CPU cycles etc. and users can be humans, applications or processes. Connection-oriented Telecoms services use either reservation, where resources are booked in advance, or Admission Control — think of telephony or ATM for example. Packet-based services and Operating Systems rely mainly on either priority schemes, for instance using simple priority queues, or flow-control feedback mechanisms such as TCP. The current Internet uses flow-control with an admit-all policy that relies on secondary measures, such as users' willingness to tolerate current service, to throttle excess load. Within the IETF, IntServ is attempting to define connection-oriented admission control schemes whilst DiffServ is based on connectionless ideas, attempting to use priorities and complex queuing disciplines to provide QoS classes.

      But if we step outside the Computer Science or Telecommunication mindset, we see the same problem occurring in other areas, but solved using resource pricing as a fundamental ingredient. For example, think of resources as airline seats, units of (electrical) power or economic wealth. Prices or taxes are used to control the load, or to maximise the return by offering differential services at different prices — think for instance of power companies who use price incentives and penalties, or airlines. We claim not only that it is impossible to separate service differentiation from pricing, but also that some use of resource pricing is the only sensible way to proceed. The first point is clear: if there are two qualities which both cost the same, any rational person will choose the higher quality! As an example, most current DiffServ proposals are interesting but fundamentally flawed, caused by attempting to define technologies independently of economics or implementation. They attempt to define local priority schemes and scheduling policies, whereas an application/user is interested in end-to-end performance. Also, the cost and complexity of scheduling policies is not addressed. Even if the hardware cost is not significant, there are real management overheads. A salutary example is provided by the ABR service of ATM: this only really makes sense if ABR operates from end-system to end-system, which now looks highly unlikely. Moreover, despite being more complex to implement, it would have to be charged at a cheaper tariff than say a CBR service operating at the MCR, despite using the same bandwidth. (Otherwise a smart user with delay-insensitive traffic could just use CBR and overflow excess traffic to a best-effort traffic class that has to be an order of magnitude cheaper.)

      The Explicit Congestion Notification (ECN) [ ] proposal within DiffServ looks attractive, and we believe it can be extended to give as much diversity as is required. Our contribution is to break away from the shackles of mandatory TCP-like behaviour inherent in the current ECN RFC. We can even incorporate call admission control at the flow or micro-flow level, an anathema to the current DiffServ!

    1998

    • Peter Key, Fairness, Flow Control and Multi-User Games, January 1998

      Presentation regarding: Fairness, Flow Control and Multi-User Games.

    • P. B. Key and D. McAuley, Differential QoS and pricing in networks: where flow control meets game theory, in Proceedings UK Performance Engineering Workshop, January 1998

      This paper looks at ways of providing quality of service (QoS) to users, based on a simple pricing scheme. It is primarily aimed at elastic traffic, and it is users rather than the network who define the flow control schemes. A framework for assessing schemes and algorithms via a distributed game is presented.

    1995

    • Peter B. Key, Connection Admission Control in ATM networks, in BT Technology Journal, vol. 13, no. 3, July 1995
    • P. B. Key, Admission Control Problems in Telecommunications, in Complex Stochastic Systems and Engineering, pp. 235–250, Oxford University Press, 1995
    • R. J. Gibbens, F. P. Kelly, and P. B. Key, A decision-theoretic approach to call admission control in ATM networks, in IEEE Journal on Selected Areas in Communications, special issue on Advances in the Fundamentals of Networking, vol. 13, no. 6, pp. 1101–1114, January 1995

      This paper describes a simple and robust ATM call admission control, and develops the theoretical background for its analysis. Acceptance decisions are based on whether the current load is less than a precalculated threshold, and Bayesian decision theory provides the framework for the choice of thresholds. This methodology allows an explicit treatment of the trade-offs between cell loss and call rejection, and of the consequences of estimation error. Further topics discussed include the robustness of the control to departures from the model assumptions, its performance relative to a control possessing precise knowledge of all unknown parameters, the relationship between leaky bucket depths and buffer requirements, and the treatment of multiple call types.
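
      A toy rendering of threshold choice as expected-cost minimisation (the cost figures and loss curve are invented; the paper's Bayesian treatment also accounts for estimation error):

        # Pick the admission threshold k minimising expected cost, trading
        # off cell loss (which grows with k) against call rejection.
        def choose_threshold(max_calls, loss_prob, c_loss=10.0, c_reject=1.0):
            best, best_cost = 0, float("inf")
            for k in range(max_calls + 1):
                cost = (c_loss * loss_prob(k)
                        + c_reject * (max_calls - k) / max_calls)
                if cost < best_cost:
                    best, best_cost = k, cost
            return best

        # Hypothetical loss curve: negligible below 80 calls, then rising fast.
        print(choose_threshold(100, lambda k: max(0.0, (k - 80) / 20) ** 3))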

    • Simon Crosby, Ian Leslie, and Peter Key, Cell Delay Variation and Burst Expansion in ATM Networks: Results from a Practical Study Using Fairisle, January 1995
    • R. J. Gibbens, F. P. Kelly, and P. B. Key, Dynamic Alternative Routing, in Routing in Communication Networks, Prentice Hall, 1995

      Dynamic call-routing strategies, which route calls according to current network state, have been widely deployed in telephone networks over the past decade because of their ability to yield improved network performance at reduced network cost. This chapter describes an ingenious example of this type of routing strategy, Dynamic Alternative Routing (DAR), currently employed by British Telecom. DAR is an adaptive call-routing strategy that stochastically selects an alternative route when a direct route is not available and uses local information about the loading of outgoing trunks to determine the feasibility of selected routes. Dynamic routing strategies, such as DAR, which operate in a decentralized fashion and rely only on local information to make call-routing decisions, are particularly attractive because they are robust in the presence of failures, they consume a minimum of network resources, and they are simple to implement.
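
      The core of DAR fits in a few lines (interface names are ours; deployed DAR also applies trunk reservation so that overflow traffic cannot crowd out directly routed calls):

        import random

        # Sticky alternative selection: use the direct route if free, else the
        # remembered alternative; on failure, block and resample an alternative.
        class DAR:
            def __init__(self, alternatives, rng=random):
                self.alts, self.rng = alternatives, rng
                self.current = rng.choice(alternatives)

            def route(self, direct_free, alt_free):
                if direct_free():
                    return "direct"
                if alt_free(self.current):
                    return self.current       # stick with a winning alternative
                blocked = self.current
                self.current = self.rng.choice(
                    [a for a in self.alts if a != blocked])
                return None                   # call blocked, alternative resampled

        router = DAR(["via_B", "via_C", "via_D"])
        print(router.route(lambda: False, lambda a: True))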

    1994

    • S. A. Crosby, I. M. Leslie, K. van der Merwe, A. Atkinson, R. Griffiths, and P. B. Key, CDV in ATM Networks — Performance Results from the Fairisle ATM Testbed, in RACE EXPLOIT Traffic Workshop, Basel, Switzerland, September 1994
    • P.B. Key, Some Control Issues in Telecommunications Networks, in Probability, Statistics and Optimisation: A Tribute to Peter Whittle, pp. 383–395, Wiley, 1994
    • F. P. Kelly and P. B. Key, Dimensioning playout buffers from an ATM network, in Proceedings of the Eleventh U.K. Teletraffic Symposium, IEE, 1994

      The authors investigate how earlier work of van den Berg and Resing and of D'Ambrosio and Melen (1993) can be combined with classical bounds for the GI/G/1 queue to give insight into the output buffer required for a smooth playout of cells. The effect of both the number of stages across the network and the rate of the stream is considered. A possibly surprising conclusion is that as the rate of the stream decreases, the required output buffer size may actually increase. While the cell delay variation of small rate streams may cause no problems for buffers within the network, the cell delay variation introduced by the network may cause problems for the customer on output.
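
      One classical GI/G/1 bound of the kind invoked here is Kingman's upper bound on the mean wait (our notation): for utilisation rho < 1,

        \[
        \mathbb{E}[W] \;\le\; \frac{\lambda\,(\sigma_A^2 + \sigma_S^2)}{2\,(1 - \rho)},
        \]

      where lambda is the arrival rate and sigma_A^2, sigma_S^2 are the variances of the interarrival and service times; applied stage by stage, bounds of this form translate accumulated delay variation into the output buffer needed for smooth playout.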

    • P. B. Key and T. R. Griffiths, Adaptive Call Admission Control in ATM Networks, in The fundamental role of Teletraffic Engineering in the Evolution of Telecommunication Networks, Elsevier, 1994

    1990

    • P.B. Key and G.A. Cope, Distributed Dynamic Routing Schemes, in IEEE Communications Magazine, vol. 28, pp. 54–64, 1990

      Schemes that do not explicitly use much information about the state of networks are briefly surveyed, with the focus on dynamic alternative routing (DAR), a simple but highly effective routing method currently planned for the British Telecom Network. State-dependent routing and how some of the methodology also has bearing on the control issue are discussed. The problem of dimensioning a network that uses dynamic routing (i.e. how much capacity is needed and where it should be put to provide an acceptable performance) is addressed. A practical example, which refers to routing in an international access network, is discussed. Some conclusions are drawn on the benefits and drawbacks of distributed routing.

    • P.B. Key, Optimal control and trunk reservation in loss networks, in Probability in the Engineering and Informational Sciences, vol. 4, pp. 203–242, Cambridge University Press, 1990
