
I am a researcher in the theory group at Microsoft Research, Redmond.
I am interested in what I call Automated Economics, which studies the question of how
technology can be used to improve the efficiency of economic systems.
My other interest is in Algorithms: designing algorithms that are faster, simpler, or that work online or in a distributed fashion, for basic combinatorial optimization problems.
I am the Program Committee Chair of APPROX 2014. The submission deadline has passed; the list of accepted papers will be out on June 7th.
I am part of the team that is running a prediction market for the 2014 Indian elections. Do try it out if you are interested in these elections or in prediction markets.
I organized a workshop on Online Matching at STOC in Palo Alto on June 1st, 2013.
I gave a tutorial on Prior-Robust Optimization at EC in Philadelphia on June 16, 2013.
I taught a course on online algorithms at UW in the winter quarter of 2013.
Email:
nikdev[AT]microsoft[DOT]com


 A new auction format for IPL player auctions
[pdf].
 Draft Auctions
[pdf Show/Hide Abstract]
with Jamie Morgenstern and Vasilis Syrgkanis.
We introduce draft auctions, a sequential auction format where at each iteration players bid for the right to buy items at a fixed price. We show that draft auctions offer an exponential improvement in social welfare at equilibrium over sequential item auctions where predetermined items are auctioned at each time step. Specifically, we show that for any subadditive valuation the social welfare at equilibrium is an O(log^2(m))-approximation to the optimal social welfare, where m is the number of items. We also provide tighter approximation results for several subclasses. Our welfare guarantees hold for Bayes-Nash equilibria and for no-regret learning outcomes, via the smooth-mechanism framework. Of independent interest, our techniques show that in a combinatorial auction setting, efficiency guarantees of a mechanism via smoothness for a very restricted class of cardinality valuations extend, with a small degradation, to subadditive valuations, the largest complement-free class of valuations. Variants of draft auctions have been used in practice and have been experimentally shown to outperform other auctions. Our results provide a theoretical justification.

Fisher Markets and Convex Programs
[pdf Show/Hide Abstract].
Convex programming duality is usually stated in its most general form, with convex objective functions and convex constraints.
At this level of generality the process of constructing a dual of a convex program is not so simple, in contrast to LP duality where
there is a simple set of rules one can use for the same purpose.
In this article, we consider a special class of convex programs, with convex objective functions and {\em linear} constraints,
and derive a simple set of rules for constructing the dual, analogous to the rules for LPs.
The main application in this article is to derive convex programs for Fisher markets.
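As a concrete member of this class, consider the standard Eisenberg-Gale convex program for a linear Fisher market, where buyer $i$ has budget $B_i$ and gets utility $u_{ij}$ per unit of good $j$ (this is the textbook formulation, stated here for illustration, not taken from the article itself); the objective is concave while every constraint is linear:

```latex
\max \;\; \sum_i B_i \log u_i
\quad \text{s.t.} \quad
u_i = \sum_j u_{ij} x_{ij} \;\;\forall i, \qquad
\sum_i x_{ij} \le 1 \;\;\forall j, \qquad
x_{ij} \ge 0.
```

The optimal dual variables of the supply constraints are the market-clearing prices.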

DBLP page

Papers on arXiv
2014
 Bandits with concave rewards and convex knapsacks
[pdf  Show/Hide Abstract]
with Shipra Agrawal.
In Proc. ACM EC 2014
In this paper, we consider a very general model for the exploration-exploitation tradeoff which allows arbitrary concave rewards and convex constraints on the decisions across time, in addition to the customary limitation on the time horizon. This model subsumes the classic multi-armed bandit (MAB) model and the Bandits with Knapsacks (BwK) model of Badanidiyuru et al. [2013]. We also consider an extension of this model to allow linear contexts, similar to the linear contextual extension of the MAB model. We demonstrate that a natural and simple extension of the UCB family of algorithms for MAB provides a polynomial-time algorithm that has near-optimal regret guarantees for this substantially more general model, and matches the bounds provided by Badanidiyuru et al. [2013] for the special case of BwK, which is quite surprising. We also provide computationally more efficient algorithms by establishing interesting connections between this problem and other well-studied problems/algorithms such as the Blackwell approachability problem, online convex optimization, and the Frank-Wolfe technique for convex optimization. We give examples of several concrete applications, where this more general model of bandits allows for richer and/or more efficient formulations of the problem.
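For readers unfamiliar with the UCB family mentioned above, here is a minimal sketch of classic UCB1 for the plain MAB setting; the paper's extension to concave rewards and convex knapsacks is substantially more involved, and the function name and interface below are illustrative, not from the paper:

```python
import math

def ucb1(pull, n_arms, horizon):
    """Classic UCB1: try each arm once, then always play the arm
    maximizing empirical mean + sqrt(2 ln t / #pulls)."""
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: one pull per arm
        else:
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        totals[arm] += pull(arm)
        counts[arm] += 1
    return counts
```

The BwK-style extensions additionally track resource consumption per pull and stop once any budget is exhausted.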
 Removing Arbitrage from Wagering Mechanisms
with Yiling Chen, David Pennock, Jennifer Wortman Vaughan.
In Proc. ACM EC 2014
 Primal dual gives optimal energy efficient online algorithms
[pdf  Show/Hide Abstract]
with Zhiyi Huang.
In Proc. SODA 2014
We consider the problem of online scheduling of jobs on unrelated machines with dynamic speed scaling to minimize the sum of energy and weighted flow time. We give an algorithm with an almost optimal competitive ratio for arbitrary power functions. (No earlier results handled arbitrary power functions for unrelated machines.) For power functions of the form $f(s) = s^\alpha$ for some constant $\alpha>1$, we get a competitive ratio of $O(\tfrac {\alpha} {\log \alpha})$, improving upon a previous competitive ratio of $O(\alpha^2)$ by Anand et al., along with a matching lower bound of $\Omega(\tfrac {\alpha} {\log \alpha})$. Further, in the resource augmentation model, with a $1+\epsilon$ speedup, we give a $2(\tfrac 1 \epsilon + 1)$-competitive algorithm, with essentially the same techniques, improving the bound of $1 + O(\frac{1}{\epsilon^2})$ by Gupta et al. and matching the bound of Anand et al. for the special case of fixed-speed unrelated machines. Unlike the previous results, most of which used an amortized local competitiveness argument or dual fitting methods, we use a primal-dual method, which is useful not only to analyze the algorithms but also to design the algorithm itself.
2013

Whole-page Optimization and Submodular Welfare Maximization with Online Bidders [pdf  Show/Hide Abstract]
with Zhiyi Huang, Nitish Korula, Vahab Mirrokni and Qiqi Yan.
In Proc. ACM EC 2013
In the context of online ad serving, display ads may appear on different types of webpages, where each page includes several ad slots and therefore multiple ads can be shown on each page. The set of ads that can be assigned to ad slots of the same page needs to satisfy various prespecified constraints including exclusion constraints, diversity constraints, and the like. Upon arrival of a user, the ad serving system needs to allocate a set of ads to the current webpage respecting these per-page allocation constraints. Previous slot-based settings ignore the important concept of a page, and may lead to highly suboptimal results in general. In this paper, motivated by these applications in display advertising and inspired by the submodular welfare maximization problem with online bidders, we study a general class of page-based ad allocation problems, present the first (tight) constant-factor approximation algorithms for these problems, and confirm the performance of our algorithms experimentally on real-world data sets.
A key technical ingredient of our results is a novel primal-dual analysis for handling free disposal, which updates dual variables using a "level function" instead of a single level, and unifies with previous analyses of related problems. This new analysis method allows us to handle arbitrarily complicated allocation constraints for each page. Our main result is an algorithm that achieves a 1 - 1/e - o(1) competitive ratio. Moreover, our experiments on real-world data sets show significant improvements of our page-based algorithms compared to the slot-based algorithms.
Finally, we observe that our problem is closely related to the submodular welfare maximization (SWM) problem. In particular, we introduce a variant of the SWM problem with online bidders, and show how to solve this problem using our algorithm for whole-page optimization.

Budget Smoothing for Internet Ad Auctions: A Game Theoretic
Approach [pdf  Show/Hide Abstract]
with Deeparnab Chakrabarty, Denis Charles, Max Chickering and Lei Wang.
In Proc. ACM EC 2013
In Internet ad auctions, search engines often throttle budget-constrained advertisers so as to spread their spends across the specified time period. Such policies are known as budget smoothing policies. In this paper, we perform a principled, game-theoretic study of what the outcome of an ideal budget smoothing algorithm should be. In particular, we propose the notion of regret-free budget smoothing policies, whose outcomes throttle each advertiser optimally, given the participation of the other advertisers. We show that regret-free budget smoothing policies always exist, and in the case of single-slot auctions we give a polynomial-time smoothing algorithm. Inspired by the existence proof, we design a heuristic for budget smoothing which performs considerably better than existing benchmark heuristics.

Prior-free Auctions for Budgeted Agents
[pdf  Show/Hide Abstract]
with Bach Q. Ha and Jason D. Hartline.
In Proc. ACM EC 2013
We consider prior-free auctions for revenue and welfare maximization when agents have a common budget. The abstract environments we consider are ones where there is a downward-closed and symmetric feasibility constraint on the probabilities of service of the agents. These environments include position auctions where slots with decreasing click-through rates are auctioned to advertisers. We generalize the envy-free benchmark from Hartline and Yan [2011] to settings with budgets and characterize the optimal envy-free outcomes for both welfare and revenue. We give prior-free mechanisms that approximate these benchmarks. A building block in our mechanism is a clinching auction for position auction environments. This auction is a generalization of the multi-unit clinching auction of Dobzinski et al. [2008] and a special case of the polyhedral clinching auction of Goel et al. [2012]. For welfare maximization, we show that this clinching auction is a good approximation to the envy-free optimal welfare for position auction environments. For profit maximization, we generalize the random sampling profit extraction auction from Fiat et al. [2002] for digital goods to give a 10.0-approximation to the envy-free optimal revenue in symmetric, downward-closed environments. Even without budgets this revenue maximization question is of interest, and we obtain an improved approximation bound of 7.5 (from 30.4 by Ha and Hartline [2012]).

Tatonnement Beyond Gross Substitutes? Gradient Descent to the Rescue [pdf  Show/Hide Abstract]
with Yun Kuen Cheung and Richard Cole.
In Proc. STOC 2013
Tatonnement is a simple and natural rule for updating prices in Exchange (Arrow-Debreu) markets. In this paper we define a class of markets for which tatonnement is equivalent to gradient descent. This is the class of markets for which there is a convex potential function whose gradient is always equal to the negative of the excess demand; we call these Convex Potential Function (CPF) markets. We show the following results.
- CPF markets contain the class of Eisenberg-Gale (EG) markets, defined previously by Jain and Vazirani.
- The subclass of CPF markets for which the demand is a differentiable function contains exactly those markets whose demand function has a symmetric negative semidefinite Jacobian.
- We define a family of continuous versions of tatonnement based on gradient descent using a Bregman divergence. As we show, all processes in this family converge to an equilibrium for any CPF market. This is analogous to the classic result for markets satisfying the Weak Gross Substitutes property.
- A discrete version of tatonnement converges toward the equilibrium for the following markets of complementary goods; its convergence rate for these settings is analyzed using a common potential function.
  - Fisher markets in which all buyers have Leontief utilities. The tatonnement process reduces the distance to the equilibrium, as measured by the potential function, to an epsilon fraction of its initial value in O(1/epsilon) rounds of price updates.
  - Fisher markets in which all buyers have complementary CES utilities. Here, the distance to the equilibrium is reduced to an epsilon fraction of its initial value in O(log(1/epsilon)) rounds of price updates.
This shows that tatonnement converges for the entire range of Fisher markets when buyers have complementary CES utilities, in contrast to prior work, which could analyze only the substitutes range, together with a small portion of the complementary range.
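To make the price-update rule concrete, here is a toy sketch of discrete tatonnement for a Fisher market with Leontief buyers, using a multiplicative update; the instance, step size, and function names are illustrative assumptions, not taken from the paper:

```python
def tatonnement(a, B, p, eta=0.1, iters=2000):
    """Multiplicative tatonnement for a Leontief Fisher market.
    a[i][j]: amount of good j buyer i needs per unit of utility,
    B[i]: budget of buyer i, p: initial prices, unit supply of each good."""
    m = len(p)
    for _ in range(iters):
        z = [-1.0] * m  # excess demand, initialized to minus the supply
        for ai, Bi in zip(a, B):
            # cost of one unit of utility for this buyer at current prices
            cost = sum(aij * pj for aij, pj in zip(ai, p))
            for j in range(m):
                z[j] += ai[j] * Bi / cost  # Leontief demand for good j
        # raise prices of over-demanded goods, lower the rest
        p = [pj * (1 + eta * zj) for pj, zj in zip(p, z)]
    return p
```

With two identical buyers of budget 1 each, total spending forces the price sum toward 2, and the update converges to a market-clearing price vector.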

Randomized Primal-Dual Analysis of RANKING for Online Bipartite Matching
[pdf  Show/Hide Abstract]
with Kamal Jain and Bobby Kleinberg.
In Proc.
SODA 2013
We give a simple proof that the RANKING algorithm of Karp, Vazirani and Vazirani is 1-1/e competitive for the online bipartite matching problem. The proof is via a randomized primal-dual argument. Primal-dual algorithms have been successfully used for many online algorithm problems, but the dual constraints are always satisfied deterministically. This is the first instance of a nontrivial randomized primal-dual algorithm in which the dual constraints only hold in expectation. The approach also generalizes easily to the vertex-weighted version considered by Aggarwal et al. Further, we show that the proof is very similar to the deterministic primal-dual argument for the online budgeted allocation problem with small bids (also called the AdWords problem) of Mehta et al.
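The RANKING algorithm itself is short; the paper's contribution is the analysis. A minimal sketch, where the graph representation and function names are our own assumptions for illustration:

```python
import random

def ranking(offline, arrivals, rng=random):
    """RANKING (Karp-Vazirani-Vazirani): each offline vertex draws a
    random rank once, up front; each arriving online vertex is matched
    to its unmatched neighbor of smallest rank, if any."""
    rank = {v: rng.random() for v in offline}
    match = {}  # offline vertex -> online vertex it is matched to
    for u, neighbors in arrivals:  # online vertices arrive in order
        free = [v for v in neighbors if v not in match]
        if free:
            match[min(free, key=rank.get)] = u
    return match
```

Over the randomness of the ranks, the expected matching size is at least (1-1/e) times the optimum offline matching.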
2012

Asymptotically Optimal Algorithm for Stochastic Adwords
[pdf  Show/Hide Abstract]
with Balasubramanian Sivan and Yossi Azar.
In Proc.
EC 2012
In this paper we consider the adwords problem in the unknown distribution
model. We consider the case where the budget-to-bid ratio $k$ is at least 2,
and give improved competitive ratios. Earlier results had competitive ratios
better than $1-1/e$ only for ``large enough'' $k$, while our competitive ratio
increases continuously with $k$. For $k=2$ the competitive ratio we get is
$0.729$, and it is $0.9$ for $k=16$. We also improve the asymptotic competitive
ratio for large $k$ from $1 - O(\sqrt{\log n /k})$ to $1 - O(\sqrt{1/k})$, thus
removing any dependence on $n$, the number of advertisers. This ratio is
optimal, even with known distributions. That is, even if an algorithm is
tailored to the distribution, it cannot get a competitive ratio of $1 -
o(\sqrt{1/k})$, whereas our algorithm does not depend on the distribution. The
algorithm is rather simple: it computes a score for every advertiser based
on his original budget, the remaining budget and the remaining number of steps
in the algorithm, and assigns a query to the advertiser with the highest bid plus
score. The analysis is based on a ``hybrid argument'' that considers
algorithms that are part actual, part hypothetical, to prove that our (actual)
algorithm is better than a completely hypothetical algorithm whose performance
is easy to analyze.

Online Matching with Concave Returns
[pdf  Show/Hide Abstract]
with Kamal Jain.
In Proc.
STOC 2012
We consider a significant generalization of the Adwords problem by allowing arbitrary concave returns,
and we characterize the optimal competitive ratio achievable.
The problem considers a sequence of items arriving online that have to be allocated to agents, with different agents bidding different amounts.
The objective function is the sum, over each agent i, of a monotonically nondecreasing concave function $M_i : R_+ \rightarrow R_+$ of the total amount allocated to i.
All variants of online matching problems (including the Adwords problem) studied in the literature consider the special case of budgeted linear functions,
that is, functions of the form $M_i( u_i) = \min \{u_i,B_i\}$ for some constant $B_i$.
The distinguishing feature of this paper is in allowing arbitrary concave returns.
The main result of this paper is that for each concave function $M$, there exists a constant $F(M) \leq 1$ such that
- there exists an algorithm with competitive ratio of $\min_i\{ F(M_i) \}$, independent of the sequence of items;
- no algorithm has a competitive ratio larger than $F(M)$ over all instances with $M_i = M$ for all $i$.
Our algorithm is based on the primal-dual paradigm and makes use of convex programming duality.
The upper bounds are obtained by formulating the task of finding the right counterexample as an optimization problem.
This path takes us through the calculus of variations, which deals with optimizing over continuous functions.
The algorithm and the upper bound are related to each other via a set of differential equations, which
points to a certain kind of duality between them.
2011

Prior-Independent Multi-Parameter Mechanism Design
[pdf  Show/Hide Abstract]
with Jason D. Hartline, Anna R. Karlin, C. Thach Nguyen
In Proc.
WINE 2011
In a multi-unit, unit-demand, multi-item auction, an auctioneer is selling a
collection of different items to a set of agents, each interested
in buying at most one unit. Each agent has a different
private value for each of the items. We consider the problem of designing a
truthful auction that maximizes the auctioneer's profit in this
setting. Previously, there has been progress on this problem
in the setting in which each value is drawn from a known prior
distribution. Specifically, it has been shown how to design auctions tailored
to these priors that achieve a constant-factor approximation to the optimal profit.
In this paper, we present the first prior-independent auction for this
setting. This auction is guaranteed to achieve a constant fraction
of the optimal expected profit for a large class of so-called ``regular'' distributions,
without specific knowledge of the distributions.

Online Algorithms with Stochastic Input
[pdf ]
In
ACM SIGecom Exchanges, Vol. 10, No. 2, June 2011, Pages 40-49.

Near Optimal Online Algorithms and Fast Approximation
Algorithms for Resource Allocation Problems
[pdf  Show/Hide Abstract]
with Kamal Jain, Balasubramanian Sivan and Christopher A. Wilkens
In Proc.
ACM EC 2011
We present algorithms for a class of resource allocation problems both in the online setting with stochastic input and in the offline setting. This class of problems contains many interesting special cases such as the Adwords problem. In the online setting we introduce a new distributional model called the adversarial stochastic input model, which is a generalization of the i.i.d. model with unknown distributions, where the distributions can change over time. In this model we give a 1 - O(epsilon) approximation algorithm for the resource allocation problem, with almost the weakest possible assumption: the ratio of the maximum amount of resource consumed by any single request to the total capacity of the resource, and the ratio of the profit contributed by any single request to the optimal profit, is at most O(epsilon^2/log(n/epsilon)), where n is the number of resources available. There are instances where this ratio is epsilon^2/log n such that no randomized algorithm can have a competitive ratio of 1 - o(epsilon^2), even in the i.i.d. model. The upper bound on the ratio that we require improves on the previous upper bound for the i.i.d. case by a factor of n.
Our proof technique also gives a very simple proof that the greedy algorithm has a competitive ratio of 1 - 1/e for the Adwords problem in the i.i.d. model with unknown distributions, and more generally in the adversarial stochastic input model, when there is no bound on the bid-to-budget ratio. All the previous proofs assume that either bids are very small compared to budgets or something very similar to this.
In the offline setting we give a fast algorithm to solve very large LPs with both packing and covering constraints. We give algorithms to approximately solve (within a factor of 1 + epsilon) the mixed packing-covering problem with O(gamma m log(n/delta)/epsilon^2) oracle calls, where the constraint matrix of this LP has dimension n x m, the success probability of the algorithm is 1 - delta, and gamma is a parameter which is very similar to the ratio described for the online setting.
We discuss several applications, and how our algorithms improve existing results in some of these applications.
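The greedy rule whose 1 - 1/e ratio is mentioned above can be sketched as follows: award each query to the advertiser with the largest effective bid, min(bid, remaining budget), and charge that amount. The data layout and function name here are illustrative assumptions:

```python
def greedy_adwords(budgets, queries):
    """Greedy for Adwords: give each query to the advertiser with the
    largest effective bid min(bid, remaining budget); charge that amount."""
    remaining = dict(budgets)
    revenue = 0.0
    for bids in queries:  # bids: {advertiser: bid} for one query
        best, charge = None, 0.0
        for adv, bid in bids.items():
            eff = min(bid, remaining.get(adv, 0.0))
            if eff > charge:
                best, charge = adv, eff
        if best is not None:
            remaining[best] -= charge
            revenue += charge
    return revenue
```

Against an adversarial sequence greedy only guarantees 1/2; the point of the result above is that under i.i.d. or adversarial stochastic input it already achieves 1 - 1/e.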

Distributed Algorithms via Gradient Descent
for Fisher Markets
[pdf  Show/Hide Abstract]
with Benjamin Birnbaum and Lin Xiao.
In Proc.
ACM EC 2011
Designing distributed algorithms that converge quickly to an equilibrium is one of the foremost research goals in algorithmic game theory, and convex programs have played a crucial role in the design of algorithms for Fisher markets. In this paper we shed new light on both aspects for Fisher markets with linear and spending constraint utilities. We show fast convergence of the Proportional Response dynamics recently introduced by Wu and Zhang [WZ07]. The convergence is obtained from a new perspective: we show that the Proportional Response dynamics is equivalent to a gradient descent algorithm (with respect to a Bregman divergence instead of Euclidean distance) on a convex program that captures the equilibria for linear utilities. We further show that the convex program easily extends to the case of spending constraint utilities, thus resolving an open question raised by [Vaz10]. This also gives a way to extend the Proportional Response dynamics to spending constraint utilities. We also prove a technical result that is interesting in its own right: the gradient descent algorithm based on a Bregman divergence converges with rate O(1/t) under a condition that is weaker than having a Lipschitz continuous gradient (which is the usual assumption in the optimization literature for obtaining the same rate).
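The Proportional Response dynamics discussed above is simple to state for linear utilities: each buyer re-splits her budget across goods in proportion to the utility each good delivered in the previous round. A sketch under the assumption of strictly positive utilities (variable names are ours):

```python
def proportional_response(u, B, iters=200):
    """Proportional Response for a linear Fisher market with unit supplies.
    u[i][j]: buyer i's utility per unit of good j (assumed positive),
    B[i]: buyer i's budget. Returns final prices and bids."""
    n, m = len(u), len(u[0])
    b = [[B[i] / m] * m for i in range(n)]  # start: budgets split evenly
    for _ in range(iters):
        p = [sum(b[i][j] for i in range(n)) for j in range(m)]       # price = total bids
        x = [[b[i][j] / p[j] for j in range(m)] for i in range(n)]   # proportional allocation
        for i in range(n):
            ui = sum(u[i][j] * x[i][j] for j in range(m))
            # re-split the budget in proportion to utility contributions
            b[i] = [B[i] * u[i][j] * x[i][j] / ui for j in range(m)]
    return p, b
```

On a symmetric two-buyer, two-good instance, the bids converge to each buyer spending her whole budget on her favorite good, at the equilibrium prices.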
2010

Fast Algorithms for Finding Matchings in Lopsided
Bipartite Graphs with Applications to Display Ads
[pdf  Show/Hide Abstract]
with Denis Charles, Max Chickering, Kamal Jain and Manan Sanghi.
In Proc.
ACM EC 2010
We derive efficient algorithms for both detecting and representing matchings in lopsided bipartite graphs; such graphs have so many nodes on one side that it is infeasible to represent them in memory or to identify matchings using standard approaches. Detecting and representing matchings in lopsided bipartite graphs is important for allocating and delivering guaranteed-placement display ads, where the corresponding bipartite graph of interest has nodes representing advertisers on one side and nodes representing webpage impressions on the other; real-world instances of such graphs can have billions of impression nodes. We provide theoretical guarantees for our algorithms, and in a real-world advertising application, we demonstrate the feasibility of our detection algorithms.

Monotonicity in Bargaining Networks
[pdf  Show/Hide Abstract]
with Yossi Azar, Kamal Jain and Yuval Rabani.
In Proc.
SODA 2010
We study bargaining networks, discussed in a recent paper of Kleinberg and Tardos, from the perspective of cooperative game theory. In particular we examine three solution concepts: the nucleolus, the core center and the core median. All three solution concepts define unique solutions, so they provide testable predictions. We define a new monotonicity property that is a natural axiom of any bargaining game solution, and we prove that all three of them satisfy this monotonicity property. This is in contrast to the conventional wisdom for general cooperative games that monotonicity and the core condition (a basic property that all three of them satisfy) are incompatible with each other. Our proofs are based on a primal-dual argument (for the nucleolus) and on the FKG inequality (for the core center and the core median). We further observe some qualitative differences between the solution concepts. In particular, there are cases where a strict version of our monotonicity property is a natural axiom, but only the core center and the core median satisfy it. On the other hand, the nucleolus is easy to compute, whereas computing the core center or the core median is #P-hard (yet it can be approximated in polynomial time).
2009

Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks
[pdf  Show/Hide Abstract]
with Yossi Azar, Benjamin Birnbaum, L. Elisa Celis and Yuval Peres.
In Proc.
FOCS 2009
Bargaining games on exchange networks have been studied by both economists and sociologists. A Balanced Outcome for such a game is an equilibrium concept that combines notions of stability and fairness. In a recent paper, Kleinberg and Tardos introduced balanced outcomes to the computer science community and provided a polynomial-time algorithm to compute the set of such outcomes. Their work left open a pertinent question: are there natural, local dynamics that converge quickly to a balanced outcome? In this paper, we provide a partial answer to this question by showing that simple edge-balancing dynamics converge to a balanced outcome whenever one exists.

The Price of Truthfulness for Pay-Per-Click Auctions
[pdf  Show/Hide Abstract]
with Sham Kakade.
In Proc.
ACM EC 2009.
We analyze the problem of designing a truthful pay-per-click auction where the click-through rates (CTRs) of the bidders are unknown to the auction. Such an auction faces the classic explore/exploit dilemma: while gathering information about the click-through rates of advertisers, the mechanism may lose revenue; however, this gleaned information may prove valuable in the future for a more profitable allocation. In this sense, such mechanisms are prime candidates to be designed using multi-armed bandit techniques. However, a naive application of multi-armed bandit algorithms would not take into account the strategic considerations of the players: players might manipulate their bids (which determine the auction's revenue) so as to maximize their own utility. Hence, we consider the natural restriction that the auction be truthful.
The revenue that we could hope to achieve is the expected revenue of a Vickrey auction that knows the true CTRs, and we define the truthful regret to be the difference between the expected revenue of the auction and this Vickrey revenue. This work sharply characterizes what regret is achievable under a truthful restriction. We show that this truthful restriction imposes statistical limits on the achievable regret: the achievable regret is $\tilde{\Theta}(T^{2/3})$, while for traditional bandit algorithms (without the truthful restriction) the achievable regret is $\tilde{\Theta}(T^{1/2})$ (where $T$ is the number of rounds). We term the extra $T^{1/6}$ factor the `price of truthfulness'.

The Adwords Problem: Online Keyword Matching
with Budgeted Bidders under Random Permutations
[pdf  Show/Hide Abstract]
with Tom Hayes.
In Proc.
ACM EC 2009.
We consider the problem of a search engine trying to assign a sequence
of search keywords to a set of competing bidders, each with a daily
spending limit. The goal is to maximize the revenue generated by these
keyword sales, bearing in mind that, as some bidders may eventually
exceed their budget, not all keywords should be sold to the highest bidder.
We assume that the sequence of keywords (or equivalently, of bids)
is revealed online. Our concern will be the competitive ratio for this
problem versus the offline optimum.
We extend the current literature on this problem by considering the setting
where the keywords arrive in a random order. In this setting we are able to
achieve a competitive ratio of $1-\epsilon$ under some mild, but necessary,
assumptions. In contrast, it is already known that when the keywords arrive
in an adversarial order, the best competitive ratio is bounded away from 1.
Our algorithm is motivated by PAC learning, and proceeds in two parts: a
training phase, and an exploitation phase.

Limited and Online Supply and the Bayesian foundations of prior-free mechanism design
[pdf  Show/Hide Abstract]
with Jason Hartline.
In Proc.
ACM EC 2009.
We study auctions for selling a limited supply of a single commodity in the case where the supply is known in advance and the case where it is unknown and must instead be allocated in an online fashion. The latter variant was proposed by Mahdian and Saberi as a model of an important phenomenon in auctions for selling Internet advertising: advertising impressions must be allocated as they arrive, and the total quantity available is unknown in advance. We describe the Bayesian optimal mechanism for these variants and extend the random sampling auction of Goldberg et al. to address the prior-free case.

Cloud Scheduling with Setup Cost
[pdf Show/Hide Abstract]
with Yossi Azar, Naama BenAroya and Navendu Jain.
In Proc.
SPAA 2013
In this paper, we investigate the problem of online task scheduling of jobs such as MapReduce jobs, Monte Carlo simulations and generating a search index from web documents, on cloud computing infrastructures. We consider a virtualized cloud computing setup comprising machines that host multiple identical virtual machines (VMs) under pay-as-you-go charging, where booting a VM requires a constant setup time. The cost of job computation depends on the number of VMs activated, and the VMs can be activated and shut down on demand. We propose a new bi-objective algorithm to minimize the maximum task delay and the total cost of the computation. We study both the clairvoyant case, where the duration of each task is known upon its arrival, and the more realistic non-clairvoyant case.

An O(n log n) Algorithm for a
Load Balancing Problem on Paths
[pdf Show/Hide Abstract]
with Uri Feige.
In Proc.
WADS 2011
We study the following load balancing problem on paths (PB). There is a path containing n vertices. Every vertex i has an initial load h_i, and every edge (j, j+1) has an initial load w_j that it needs to distribute among the two vertices that are its endpoints. The goal is to distribute the load of the edges over the vertices in a way that will make the loads of the vertices as balanced as possible (formally, minimizing the sum of squares of loads of the vertices). This problem can be solved in polynomial time, e.g., by dynamic programming. We present an algorithm that solves this problem in time O(n log n).
As a mental aid in the design of our algorithm, we first design a hydraulic apparatus composed of bins (representing vertices), tubes (representing edges) that are connected between bins, cylinders within the tubes that constrain the flow of water, and valves that can close the connections between bins and tubes. Water may be poured into the various bins, to levels that correspond to the initial loads in the input to the PB problem. When all valves are opened, the water flows between bins (to the extent that is feasible due to the cylinders) and stabilizes at levels that are the correct output to the respective PB problem. Our algorithm is based on a fast simulation of the behavior of this hydraulic apparatus, when valves are opened one by one.

Real-Time Bidding Algorithms for Performance-Based Display Ad Allocation
[pdf Show/Hide Abstract]
with Ye Chen, Pavel Berkhin and Bo Anderson.
(To appear) In Proc.
KDD 2011
We describe a real-time bidding algorithm for performance-based display ad allocation. A central issue in performance display advertising is matching campaigns to ad impressions, which can be formulated as a constrained optimization problem that maximizes revenue subject to constraints such as budget limits and inventory availability. The current practice is to solve the optimization problem offline at a tractable level of impression granularity (e.g., the placement level), and to serve ads online based on the precomputed static delivery scheme. Although this offline approach takes a global view to achieve optimality, it fails to scale to ad delivery decision making at an individual impression level. Therefore, we propose a real-time bidding algorithm that enables fine-grained impression valuation (e.g., targeting users with real-time conversion data), and adjusts the value-based bid according to a real-time constraint snapshot (e.g., budget consumption level). Theoretically, we show that under a linear programming (LP) primal-dual formulation, the simple real-time bidding algorithm is indeed an online solver to the original primal problem by taking the optimal solution to the dual problem as input. In other words, the online algorithm guarantees the offline optimality given the same level of knowledge an offline optimization would have. Empirically, we develop and experiment with two real-time bid adjustment approaches to adapting to the non-stationary nature of the marketplace: one adjusts bids against the real-time constraint satisfaction level using control-theoretic methods, and the other adjusts bids also based on the statistically modeled historical bidding landscape. Finally, we show experimental results with real-world ad serving data.

Local Dynamics in Bargaining Networks via Random-Turn Games
[pdf]
with Elisa Celis and Yuval Peres.
In Proc.
WINE 2010
We present a new technique for analyzing the rate of convergence
of local dynamics in bargaining networks. The technique reduces
balancing in a bargaining network to optimal play in a random-turn
game. We analyze this game using techniques from martingale and
Markov chain theory. We obtain a tight polynomial bound on the rate
of convergence for a nontrivial class of unweighted graphs (the previously
known bound was exponential). Additionally, we show this technique
extends naturally to many other graphs and dynamics.

Market Equilibrium with Transaction Costs
[pdf]
with Sourav Chakraborty and Chinmay Karande.
In Proc.
WINE 2010
Identical products being sold at different prices in different
locations is a common phenomenon. To model such scenarios, we supplement the classical Fisher market model by introducing {\em transaction costs}. For every buyer $i$ and good $j$, there is a transaction cost of $c_{ij}$; if the price of good $j$ is $p_j$, then the cost to buyer $i$ {\em per unit} of $j$ is $p_j + c_{ij}$. The same good can thus be sold at different (effective) prices to different buyers. We provide a combinatorial algorithm that computes $\epsilon$-approximate equilibrium prices and allocations in $O\left(\frac{1}{\epsilon}(n+\log{m})mn\log(B/\epsilon)\right)$
operations, where $m$ is the number of goods, $n$ is the number of buyers and $B$ is the sum of the budgets of all the buyers.
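As a minimal sketch of the model (hypothetical names; linear utilities are assumed purely for illustration), buyer $i$'s effective price for good $j$ is $p_j + c_{ij}$, so transaction costs can change which goods give a buyer the best utility per unit of money:

```python
# Minimal sketch: with transaction costs, a buyer's "bang per buck" for
# good j is u_j / (p_j + c_j) rather than u_j / p_j.
def best_goods(utilities, prices, costs):
    """Return the goods maximizing utility per unit of effective spending
    for one buyer (linear utilities assumed for illustration)."""
    ratios = [u / (p + c) for u, p, c in zip(utilities, prices, costs)]
    best = max(ratios)
    return [j for j, r in enumerate(ratios) if abs(r - best) < 1e-12]

# Without transaction costs good 1 wins (3/0.5 = 6 vs 4/1 = 4);
# with costs [0.5, 1.0] the effective prices flip the choice to good 0.
no_costs = best_goods([4.0, 3.0], [1.0, 0.5], [0.0, 0.0])
with_costs = best_goods([4.0, 3.0], [1.0, 0.5], [0.5, 1.0])
```

This is exactly why the same good can clear at different effective prices for different buyers.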

An Online Multi-unit Auction with Improved Competitive Ratio
[pdf]
with Sourav Chakraborty.
In Proc.
WINE 2009.
We improve the best known competitive ratio (from 1/4 to 1/2)
for the online multi-unit allocation problem,
where the objective is to maximize the single-price revenue.
Moreover, the competitive ratio of our algorithm tends to 1
as the bid profile tends to "smoothen".
This algorithm is used as a subroutine in designing truthful auctions
for the same setting: the allocation has to be done online,
while the payments can be decided at the end of the day.
Earlier, a reduction from the auction design problem to the allocation problem
was known only for the unit-demand case.
We give a reduction for the general case when the bidders have
decreasing marginal utilities.
The problem is inspired by sponsored search auctions.
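As a hedged sketch of the objective (the benchmark, not the online algorithm itself): the offline single-price revenue is obtained by selling $k$ units at the $k$-th highest bid and taking the best choice of $k$.

```python
# Offline benchmark: best single-price revenue. Sort bids descending;
# selling k units at price b_k (the k-th highest bid) earns k * b_k.
def single_price_revenue(bids):
    bids = sorted(bids, reverse=True)
    return max((k + 1) * b for k, b in enumerate(bids))

# Bids [5, 4, 3, 1]: 1*5=5, 2*4=8, 3*3=9, 4*1=4, so the benchmark is 9.
best = single_price_revenue([5, 4, 3, 1])
```

An online algorithm is c-competitive if it always recovers at least a c fraction of this quantity.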

A Computational Theory of Awareness and Decision Making
[pdf]
with Lance Fortnow.
In Proc.
Theoretical Aspects of Rationality and Knowledge, TARK 2009
We exhibit a new computation-based definition of awareness,
informally that our level of unawareness of an object is the amount
of time needed to generate that object within a certain environment.
We give several examples to show this notion matches our intuition
in scenarios where one organizes, accesses and transfers
information. We also give a formal process-independent definition of
awareness based on Levin's universal enumeration.
We show the usefulness of computational awareness by showing how it
relates to decision making, and how others can manipulate our
decision making with appropriate advertising; in particular, we show
connections to sponsored search and brand awareness. Understanding
awareness can also help rate the effectiveness of various user
interfaces designed to access information.

Market Equilibria in Polynomial Time for
Fixed Number of Goods or Agents
[pdf]
with Ravi Kannan.
In Proc.
FOCS 2008.
We consider markets in the classical Arrow-Debreu model. There are n
agents and m goods. Each buyer has a concave utility function
(of the bundle of goods he/she buys) and an initial bundle. At an
``equilibrium'' set of prices for goods, if each individual buyer
separately exchanges the initial bundle for an optimal bundle at
the set prices, the market clears, i.e., all goods are exactly
consumed. Classical theorems guarantee the existence of equilibria,
but computing them has been the subject of much recent research.
In the related area of Multi-Agent Games, much attention has been paid to
the complexity as well as algorithms.
While most general problems are
hard, polynomial time algorithms have been developed for restricted
classes of games, when one assumes the
number of strategies is constant.
For the Market Equilibrium problem, several important special cases of
utility functions have been tackled.
Here we begin a program for this
problem similar to that for multi-agent games, where general utilities
are considered. We begin by showing that if the utilities are
separable piecewise linear concave (PLC) functions, and the number of
goods (or alternatively the number of buyers) is constant, then we can
compute an exact equilibrium in polynomial time. Our
technique for the constant number of goods is to decompose the space
of price vectors into cells using certain hyperplanes, so that in each
cell, each buyer's threshold marginal utility is known. Still, one
needs to solve a linear optimization problem in each cell.
We then show the main result  that for general
(nonseparable) PLC utilities, an exact equilibrium can be found in
polynomial time provided the number of goods is constant. The starting
point of the algorithm is a ``cell-decomposition'' of the space of
price vectors using polynomial surfaces (instead of hyperplanes).
We use results from computational algebraic geometry to bound the
number of such cells.
For solving the problem inside each cell, we introduce and use a novel
LP-duality based method. We note that if both the number of
goods and the number of agents can vary, the problem is PPAD-hard even for
very special cases of PLC utilities, such as Leontief utilities and separable PLC utilities.
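The threshold marginal utility mentioned above can be illustrated with a minimal sketch (hypothetical names; this is not the cell-decomposition algorithm): facing fixed prices, a buyer with a separable PLC utility fills segments in decreasing order of marginal utility per unit of money, and the threshold is the rate of the last segment she can afford.

```python
# Illustrative sketch of a buyer's "threshold marginal utility" under a
# separable piecewise-linear concave (PLC) utility and fixed prices.
def threshold_bang_per_buck(segments, prices, budget):
    """segments[j] = list of (marginal_utility, max_quantity) for good j,
    with decreasing marginals (concavity). Returns the bang-per-buck of
    the last segment the buyer (partially) purchases."""
    offers = [(mu / prices[j], qty * prices[j])   # (rate, money cost of segment)
              for j, segs in enumerate(segments)
              for (mu, qty) in segs]
    offers.sort(reverse=True)                     # best rate first
    threshold = 0.0
    for rate, cost in offers:
        if budget <= 0:
            break
        threshold = rate
        budget -= min(cost, budget)
    return threshold
```

Inside a cell of the decomposition, every buyer's threshold is fixed, which is what lets each cell be handled by a single optimization problem.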

New Geometry-Inspired Relaxations and Algorithms for the Metric Steiner Tree Problem.
[pdf]
with Deeparnab Chakrabarty, and Vijay Vazirani.
To appear in Math Programming (Series A) Volume 122, Number 2 (April 2010).
(Preliminary version in IPCO 2008. )
Determining the integrality gap of the bidirected cut relaxation for the metric Steiner tree
problem, and exploiting it algorithmically, is a longstanding open problem. We use geometry
to define an LP whose dual is equivalent to this relaxation. This opens up the possibility of
using the primal-dual schema in a geometric setting for designing an algorithm for this problem.
Using this approach, we obtain a 4/3 factor algorithm and integrality gap bound for the case
of quasi-bipartite graphs; the previous best upper bound on the integrality gap was 3/2. We
also obtain a factor $\sqrt{2}$ strongly polynomial algorithm for this class of graphs.
A key difficulty experienced by researchers in working with the bidirected cut relaxation
was that any reasonable dual growth procedure produces extremely unwieldy dual solutions.
A new algorithmic idea helps finesse this difficulty: reducing the cost of certain edges
and constructing the dual in this altered instance. This idea can be extracted into a new
technique for running the primal-dual schema in the setting of approximation algorithms.

On Computing the Distinguishing Numbers of Planar Graphs and Beyond: a Counting Approach
[pdf]
with V. Arvind and Christine Cheng.
In
SIAM Journal on Discrete Mathematics, vol. 22, no. 4 (October 2008), pp. 12971324.
A vertex k-labeling of a graph G is distinguishing if the only automorphism that preserves the labels
of G is the identity map. The distinguishing number of G, D(G), is the smallest integer k for which G
has a distinguishing k-labeling. In this paper, we apply the principle of inclusion-exclusion and develop
recursive formulas to count the number of inequivalent distinguishing k-labelings of a graph. Along the
way, we prove that the distinguishing number of a planar graph can be computed in time polynomial in
the size of the graph.
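The definition can be checked by brute force on tiny graphs (illustration only; the paper's point is that planar graphs admit a polynomial-time algorithm, while this enumeration is exponential):

```python
from itertools import permutations, product

def automorphisms(n, edges):
    """All vertex permutations of {0..n-1} preserving the edge set."""
    E = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in edges} == E]

def distinguishing_number(n, edges):
    """Smallest k admitting a k-labeling preserved only by the identity."""
    autos = automorphisms(n, edges)
    identity = tuple(range(n))
    for k in range(1, n + 1):
        for labels in product(range(k), repeat=n):
            preserving = [p for p in autos
                          if all(labels[p[v]] == labels[v] for v in range(n))]
            if preserving == [identity]:
                return k
    return n

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the 4-cycle
```

For the 4-cycle, every 2-labeling is preserved by some nontrivial automorphism, so D(C4) = 3, whereas the path on 4 vertices already has D = 2.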

On the Equivalence of Competitive and Submodular Markets
[pdf]
with Deeparnab Chakrabarty.
In Operations Research Letters, vol. 37, no. 3, pp. 155  158, 2009.
Prelim. version in Proc.
WINE 2007
In this paper, we study competitive markets: a market is
competitive if increasing the endowment of any one buyer does not
increase the equilibrium utility of any other buyer. In the Fisher setting,
competitive markets contain all markets with weak gross
substitutability (WGS), a property which enables efficient algorithms for equilibrium
computation.
We show that every uniform utility allocation (UUA) market which is
competitive, is a submodular utility allocation (SUA) market. Our result
provides evidence for the existence of efficient algorithms for the class
of competitive markets.

Computing Market Equilibrium: Beyond Weak Gross Substitutes
[pdf]
with Chinmay Karande.
In Proc.
WINE 2007
The property of Weak Gross Substitutability (WGS) of goods in a market has been found
to be conducive to efficient algorithms for finding equilibria. In this paper, we give a natural
definition of a $\delta$-approximate WGS property, and show that the auction algorithm of Garg and Kapoor
can be extended to give an $(\epsilon + \delta)$-approximate equilibrium for markets with this
property.

New Results on Rationality and Strongly Polynomial Solvability in Eisenberg-Gale Markets
[pdf]
with Deeparnab Chakrabarty and Vijay Vazirani.
In Proc.
WINE 2006
We study the structure of EG(2) markets, the class of Eisenberg-Gale markets with
two agents. We prove that all markets in this class are rational and they admit strongly
polynomial algorithms whenever the polytope containing the set of feasible utilities of
the two agents can be described via a combinatorial LP. This helps resolve positively
the status of two markets left as open problems by Jain and Vazirani: the capacity allocation
market in a directed graph with two source-sink pairs and the network coding market
in a directed network with two sources.
Our algorithms for solving the corresponding nonlinear convex programs are
fundamentally different from those obtained by Jain and Vazirani: whereas they use the
primal-dual schema, we use a carefully constructed binary search.

Integrality Gaps for Sparsest Cut and Minimum Linear Arrangement Problems
[pdf]
with Subhash A. Khot, Rishi Saket and Nisheeth K. Vishnoi.
In Proc.
STOC 2006.
Arora, Rao and Vazirani
showed that the standard semidefinite programming (SDP)
relaxation of the sparsest cut problem with the triangle
inequality
constraints has an integrality gap of $O(\sqrt{\log n})$. They
conjectured that the gap is bounded from above by a constant. In this paper, we disprove
this conjecture
by constructing an $\Omega(\log \log n)$ integrality gap
instance. Khot and Vishnoi had earlier disproved the
non-uniform version of this conjecture.
A simple ``stretching'' of the integrality gap instance for the
sparsest cut problem serves as an $\Omega(\log \log n)$ integrality gap
instance for the SDP relaxation of the Minimum Linear Arrangement problem. This
SDP relaxation was considered in Charikar et al. and Feige and Lee, where it was shown that its integrality gap is bounded
from above by $O(\sqrt{\log n} \log \log n).$

Price of Anarchy, Locality Gap, and
a Network Service Provider Game
[pdf]
with Naveen Garg, Rohit Khandekar,
Vinayaka Pandit, Amin Saberi, and Vijay V. Vazirani.
In Proc.
WINE 2005
In this paper, we define a network service provider game. We show that
the price of anarchy of the defined game can be bounded by analyzing
a local search heuristic for a related facility location
problem called the $k$-facility location problem. As a result, we show
that the $k$-facility location problem has a locality gap of 5. This
result is of interest in its own right. Our result gives evidence to the
belief that the price of anarchy of certain games is related to the
analysis of local search heuristics.

On the complexity of Hilbert's 17th
problem
[pdf]
with Richard J. Lipton and Nisheeth Vishnoi.
In Proc.
FSTTCS 2004.
Hilbert posed the following problem as the 17th in the list of 23 problems in his famous 1900 lecture:
Given a
multivariate polynomial that takes only nonnegative values over
the reals, can it be represented as a sum of squares of rational
functions?
In 1927, E.~Artin gave an affirmative answer to this question.
His result guaranteed the existence of such a finite representation and
raised the following important question:
What is the {\bf minimum number} of rational functions needed to represent
any nonnegative $n$-variate, degree $d$ polynomial?
In 1967, Pfister proved that any $n$variate nonnegative
polynomial over the reals can be written as sum of squares of at most $2^n$ rational functions.
In spite of a considerable effort by
mathematicians for over 75 years, it is not known
whether $n+2$ rational functions are sufficient!
Given the lack of progress towards the resolution of this question, we initiate the study of Hilbert's 17th problem from the point
of view of Computational Complexity.
In this setting, the following question is a natural relaxation:
What is the {\bf descriptive complexity} of the sum of squares representation (as rational functions) of
a nonnegative, $n$-variate, degree $d$ polynomial?
We consider arithmetic circuits as a natural representation of rational functions.
We are able to show, assuming a standard conjecture in complexity theory, that it is impossible that every nonnegative, $n$-variate, degree four
polynomial can be represented as a sum of squares of a small (polynomial in $n$) number of rational functions, each of which has a small arithmetic circuit (over the rationals) computing it.
Our result points in the direction that it is unlikely
that every nonnegative, $n$-variate polynomial over the reals can be written as a sum of squares of a polynomial (in $n$) number of rational functions. Further, the connection to standard (and believed to be hard to prove) complexity-theoretic conjectures sheds some light on why it has been difficult for mathematicians to close the gap between $n+2$ and $2^n$. We hope that our line of work will play an important role in the resolution of this question.
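A classical example (Motzkin, 1967; a standard fact, not from the paper) shows why rational functions, rather than polynomials, are genuinely needed in Artin's theorem:

```latex
% Motzkin's polynomial: nonnegative on all of R^2 by the AM-GM inequality,
%   (x^4 y^2 + x^2 y^4 + 1)/3 \ge (x^6 y^6)^{1/3} = x^2 y^2,
% yet it is not a sum of squares of polynomials; by Artin's theorem it is
% a sum of squares of rational functions.
\[
M(x,y) \;=\; x^4 y^2 + x^2 y^4 - 3x^2 y^2 + 1 \;\ge\; 0 .
\]
```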

The Spending Constraint Model for Market
Equilibrium: Algorithmic, Existence and Uniqueness Results
[pdf]
with
Vijay V. Vazirani.
In Proc.
STOC 2004.
The traditional model of market equilibrium supports impressive existence results, including the celebrated
Arrow-Debreu Theorem. However, in this model, polynomial time
algorithms for computing (or approximating) equilibria are
known only for linear utility functions. We present a new,
and natural, model of market equilibrium that not only admits existence and uniqueness results paralleling those for
the traditional model but is also amenable to efficient algorithms.

An Improved Approximation Scheme for
Computing Arrow-Debreu Prices for the Linear Case
[pdf]
with Vijay Vazirani.
In Proc.
FSTTCS 2003.
Recently, Jain, Mahdian and Saberi
gave an FPTAS for the problem of computing a market
equilibrium in the Arrow-Debreu setting, when the utilities are linear functions. Their running
time depended on the size of the numbers representing the
utilities and endowments of the buyers. In this paper, we give a
strongly polynomial time approximation scheme for this problem.
Our algorithm builds upon the main ideas
behind the algorithm of Devanur et al.

Strategyproof Cost-Sharing Mechanisms
for Set Cover and Facility Location Games
[pdf]
with Milena Mihail and Vijay Vazirani.
Decision Support Systems 39 (March 2005), pp 1122.
Prelim. version in Proc.
ACM EC 2003
Strategyproof cost-sharing mechanisms, lying in the core,
that recover $1/\alpha$ fraction of the cost, are
presented for the set cover and facility location games; $\alpha =
O(\log n)$ for the former and 1.861 for the latter. Our mechanisms
utilize approximation algorithms for these problems based
on the method of dual-fitting.
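A minimal sketch in the spirit of dual fitting (our illustration, not the paper's exact mechanism): run the greedy set cover algorithm and charge each element the average cost of the step that covered it. The dual-fitting analysis of greedy is what bounds how much of the cost such shares can recover.

```python
# Hypothetical sketch: greedy set cover with per-element cost shares.
def greedy_cost_shares(universe, sets):
    """sets: dict name -> (cost, set of elements). Repeatedly pick the set
    minimizing cost per newly covered element, and charge each newly
    covered element that ratio as its cost share."""
    uncovered = set(universe)
    shares = {}
    while uncovered:
        best = min((s for s in sets if sets[s][1] & uncovered),
                   key=lambda s: sets[s][0] / len(sets[s][1] & uncovered))
        cost, elems = sets[best]
        newly = elems & uncovered
        price = cost / len(newly)
        for e in newly:
            shares[e] = price
        uncovered -= newly
    return shares
```

By construction the shares exactly pay for the sets greedy picks; the mechanism-design question is how much of the optimal cost such shares can recover while remaining in the core.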

Market Equilibrium via a Primal-Dual-Type Algorithm
[pdf]
with Christos H. Papadimitriou, Amin Saberi and Vijay V. Vazirani.
In The Journal of the ACM, vol. 55, no. 5 (October 2008), pp 118.
Prelim. version appeared in FOCS 2002.
We give the first polynomial time algorithm for exactly computing an equilibrium
for the linear utilities case of the market model defined by Fisher.
Our algorithm uses the primal-dual paradigm in the enhanced setting of KKT conditions
and convex programs. We pinpoint the added difficulty raised by this setting and the
manner in which our algorithm circumvents it.
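Not the paper's primal-dual algorithm itself, but as a minimal sketch of what it computes, here is a checker for the equilibrium conditions in the linear Fisher model (hypothetical names; unit supply of each good and positive prices assumed): the market clears, every budget is exhausted, and each buyer receives only goods maximizing her bang-per-buck.

```python
def is_fisher_equilibrium(u, budgets, prices, alloc, tol=1e-9):
    """u[i][j]: buyer i's utility per unit of good j (linear utilities);
    alloc[i][j]: units of good j given to buyer i; unit supply assumed."""
    n, m = len(u), len(prices)
    for j in range(m):                      # market clearing
        if abs(sum(alloc[i][j] for i in range(n)) - 1.0) > tol:
            return False
    for i in range(n):
        spend = sum(alloc[i][j] * prices[j] for j in range(m))
        if abs(spend - budgets[i]) > tol:   # budget exhausted
            return False
        best = max(u[i][j] / prices[j] for j in range(m))
        for j in range(m):                  # only best bang-per-buck goods
            if alloc[i][j] > tol and u[i][j] / prices[j] < best - tol:
                return False
    return True
```

The algorithm's achievement is producing prices and an allocation satisfying these KKT-style conditions exactly, in polynomial time.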
