
I am a researcher in the theory group
at Microsoft Research, Redmond.
I am interested in what I call Automated Economics, which studies how
technology can be used to improve the efficiency of economic systems.
My other interest is in algorithms: I design algorithms that are faster, simpler,
or that work online or in a distributed fashion, for basic combinatorial optimization problems.
I am teaching a course on online algorithms at UW this winter quarter, 2013.
Email:
ndevanur[AT]gmail[DOT]com


Computation of Equilibria



Fisher Markets and Convex Programs
[pdf]
Working Paper
Convex programming duality is usually stated in its most general form, with convex objective functions and convex constraints.
At this level of generality the process of constructing a dual of a convex program is not so simple, in contrast to LP duality where
there is a simple set of rules one can use for the same purpose.
In this article, we consider a special class of convex programs, with convex objective functions and {\em linear} constraints,
and derive a simple set of rules for constructing the dual, analogous to those for LPs.
We also show applications of these rules; the main application in this article is to derive convex programs for Fisher markets.
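One illustrative member of this class (a classical fact, included here as context rather than taken from the article) is the Eisenberg-Gale convex program, whose optimal solutions are the equilibrium allocations of a linear Fisher market with budgets $B_i$ and utilities $u_{ij}$:

\[
\max \ \sum_i B_i \log \Big( \sum_j u_{ij} x_{ij} \Big)
\quad \text{subject to} \quad
\sum_i x_{ij} \le 1 \ \ \forall j, \qquad x_{ij} \ge 0 \ \ \forall i,j.
\]

The objective is concave and the constraints are linear, so the rules derived here apply; the equilibrium prices $p_j$ appear as the dual variables of the constraints $\sum_i x_{ij} \le 1$.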

Market Equilibrium with Transaction Costs
[pdf]
with Sourav Chakraborty and Chinmay Karande.
In Proc. WINE 2010.
Identical products being sold at different prices in different
locations is a common phenomenon. To model such scenarios, we supplement the classical Fisher market model by introducing {\em transaction costs}. For every buyer $i$ and good $j$, there is a transaction cost of $c_{ij}$; if the price of good $j$ is $p_j$, then the cost to buyer $i$ {\em per unit} of $j$ is $p_j + c_{ij}$. The same good can thus be sold at different (effective) prices to different buyers. We provide a combinatorial algorithm that computes $\epsilon$-approximate equilibrium prices and allocations in $O\left(\frac{1}{\epsilon}(n+\log{m})mn\log(B/\epsilon)\right)$
operations, where $m$ is the number of goods, $n$ is the number of buyers and $B$ is the sum of the budgets of all the buyers.
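A minimal sketch of the model (not of the algorithm in the paper): with transaction costs, each buyer sees its own effective price $p_j + c_{ij}$, so buyers can disagree about which good gives the most utility per unit of money. All numbers below are made up for illustration.

```python
# Effective prices and best "bang per buck" under transaction costs.
# Illustrates the model only; the combinatorial algorithm is not shown.
prices = [2.0, 3.0]                    # p_j, one per good
costs = [[0.0, 1.0],                   # c_ij: transaction cost of buyer i for good j
         [2.0, 0.0]]
utils = [[4.0, 9.0],                   # utility of buyer i per unit of good j
         [6.0, 3.0]]

def best_good(i):
    """Good maximizing utility per unit of money spent, for buyer i."""
    bang = [utils[i][j] / (prices[j] + costs[i][j]) for j in range(len(prices))]
    return max(range(len(prices)), key=lambda j: bang[j])

# The same posted prices induce different effective prices, hence choices.
print(best_good(0))  # buyer 0: effective prices (2, 4), bang (2.0, 2.25) -> good 1
print(best_good(1))  # buyer 1: effective prices (4, 3), bang (1.5, 1.0) -> good 0
```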

Market Equilibria in Polynomial time for
fixed number of goods or agents
[pdf]
with Ravi Kannan.
In Proc. FOCS 2008.
We consider markets in the classical Arrow-Debreu model. There are n
agents and m goods. Each buyer has a concave utility function
(of the bundle of goods he/she buys) and an initial bundle. At an
``equilibrium'' set of prices for goods, if each individual buyer
separately exchanges the initial bundle for an optimal bundle at
the set prices, the market clears, i.e., all goods are exactly
consumed. Classical theorems guarantee the existence of equilibria,
but computing them has been the subject of much recent research.
In the related area of Multi-Agent Games, much attention has been paid to
the complexity as well as algorithms.
While most general problems are
hard, polynomial time algorithms have been developed for restricted
classes of games, when one assumes the
number of strategies is constant.
For the Market Equilibrium problem, several important special cases of
utility functions have been tackled.
Here we begin a program for this
problem similar to that for multi-agent games, where general utilities
are considered. We begin by showing that if the utilities are
separable piecewise linear concave (PLC) functions, and the number of
goods (or alternatively the number of buyers) is constant, then we can
compute an exact equilibrium in polynomial time. Our
technique for the constant number of goods is to decompose the space
of price vectors into cells using certain hyperplanes, so that in each
cell, each buyer's threshold marginal utility is known. Still, one
needs to solve a linear optimization problem in each cell.
We then show the main result: for general
(non-separable) PLC utilities, an exact equilibrium can be found in
polynomial time provided the number of goods is constant. The starting
point of the algorithm is a ``cell decomposition'' of the space of
price vectors using polynomial surfaces (instead of hyperplanes).
We use results from computational algebraic geometry to bound the
number of such cells.
For solving the problem inside each cell, we introduce and use a novel
LP-duality based method. We note that if the number of
buyers and the number of goods can both vary, the problem is PPAD-hard even for
very special cases of PLC utilities, such as Leontief utilities and separable PLC utilities.
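To fix notation for "separable PLC": such a utility is a sum over goods of one-dimensional piecewise linear concave functions, each given by segments of decreasing slope. A minimal sketch of evaluating one (the segment data is illustrative, not from the paper):

```python
def plc(segments, x):
    """Evaluate a 1-D piecewise linear concave utility at amount x.

    segments: list of (length, slope) pairs with strictly decreasing
    slopes, which guarantees concavity; utility is flat past the end.
    """
    total = 0.0
    for length, slope in segments:
        take = min(x, length)
        total += take * slope
        x -= take
        if x <= 0:
            break
    return total

# The first unit earns utility 3 per unit, the next two earn 1 per unit.
segs = [(1.0, 3.0), (2.0, 1.0)]
print(plc(segs, 0.5))  # 1.5
print(plc(segs, 2.0))  # 3 + 1 = 4.0
print(plc(segs, 10.0)) # saturates at 3 + 2 = 5.0
```

The cell decomposition fixes, within each cell, which segment (the "threshold marginal utility") each buyer is on; the sketch above only evaluates the utility.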

The Spending Constraint Model for Market
Equilibrium: Algorithmic, Existence and Uniqueness results
[pdf]
with Vijay V. Vazirani.
In Proc. STOC 2004.
The traditional model of market equilibrium supports impressive existence results, including the celebrated
Arrow-Debreu Theorem. However, in this model, polynomial time
algorithms for computing (or approximating) equilibria are
known only for linear utility functions. We present a new,
and natural, model of market equilibrium that not only admits existence and uniqueness results paralleling those for
the traditional model but is also amenable to efficient algorithms.

Market Equilibrium via a Primal-Dual-Type Algorithm
[pdf]
with Christos H. Papadimitriou, Amin Saberi and Vijay V. Vazirani.
In the Journal of the ACM, vol. 55, no. 5 (October 2008), pp. 1-18.
Preliminary version in FOCS 2002.
We give the first polynomial time algorithm for exactly computing an equilibrium
for the linear utilities case of the market model defined by Fisher.
Our algorithm uses the primal-dual paradigm in the enhanced setting of KKT conditions
and convex programs. We pinpoint the added difficulty raised by this setting and the
manner in which our algorithm circumvents it.

Computing Market Equilibrium: Beyond Weak Gross Substitutes
[pdf]
with Chinmay Karande.
In Proc. WINE 2007.
The property of Weak Gross Substitutability (WGS) of goods in a market has been found
to be conducive to efficient algorithms for finding equilibria. In this paper, we give a natural
definition of a $\delta$-approximate WGS property, and show that the auction algorithm of Garg and Kapoor
can be extended to give an $(\epsilon + \delta)$-approximate equilibrium for markets with this
property.

New results on Rationality and Strongly Polynomial Solvability in Eisenberg-Gale markets
[pdf]
with Deeparnab Chakrabarty and Vijay Vazirani.
In Proc. WINE 2006.
We study the structure of EG(2) markets, the class of Eisenberg-Gale markets with
two agents. We prove that all markets in this class are rational and they admit strongly
polynomial algorithms whenever the polytope containing the set of feasible utilities of
the two agents can be described via a combinatorial LP. This helps resolve positively
the status of two markets left as open problems by Jain and Vazirani: the capacity allocation
market in a directed graph with two source-sink pairs and the network coding market
in a directed network with two sources.
Our algorithms for solving the corresponding nonlinear convex programs are
fundamentally different from those obtained by Jain and Vazirani; whereas they use the primal-dual
schema, we use a carefully constructed binary search.

An Improved Approximation Scheme for
Computing Arrow-Debreu Prices for the Linear Case
[pdf]
with Vijay Vazirani.
In Proc. FSTTCS 2003.
Recently, Jain, Mahdian and Saberi
gave an FPTAS for the problem of computing a market
equilibrium in the ArrowDebreu setting, when the utilities are linear functions. Their running
time depended on the size of the numbers representing the
utilities and endowments of the buyers. In this paper, we give a
strongly polynomial time approximation scheme for this problem.
Our algorithm builds upon the main ideas
behind the algorithm in Devanur et al.

New Convex Programs and Distributed Algorithms
for Fisher Markets with Linear and Spending Constraint Utilities
[pdf]
with Benjamin Birnbaum and Lin Xiao.
Working paper
In this paper we shed new light on convex programs
and distributed algorithms for Fisher markets with linear
and spending constraint utilities.
- We give a new convex program for the linear utilities case of Fisher markets. This program easily extends to the case of spending constraint utilities as well, thus resolving an open question raised by Vazirani.
- We show that the gradient descent algorithm with respect to a Bregman divergence converges at rate O(1/t) under a condition that is weaker than having a Lipschitz continuous gradient (which is the usual assumption in the optimization literature for obtaining the same rate).
- We show that the Proportional Response dynamics recently introduced by Zhang are equivalent to a gradient descent algorithm for solving the new convex program. This insight also gives us better convergence rates, and helps us generalize the dynamics to spending constraint utilities.
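A sketch of the Proportional Response dynamics for the linear-utilities case, assuming the standard setup (budgets $B_i$, utilities $u_{ij}$, unit supply of each good): each buyer resplits its budget across goods in proportion to the utility received from each. The instance below is made up.

```python
def proportional_response(B, u, iters=2000):
    """Proportional Response dynamics for a linear Fisher market.

    B: budgets; u[i][j]: utility of buyer i per unit of good j.
    Returns the price vector p[j] = total spending on good j.
    Assumes every good is valued by some buyer, so prices stay positive.
    """
    n, m = len(B), len(u[0])
    # Initial bids: split each budget in proportion to the utilities.
    b = [[B[i] * u[i][j] / sum(u[i]) for j in range(m)] for i in range(n)]
    for _ in range(iters):
        p = [sum(b[i][j] for i in range(n)) for j in range(m)]
        x = [[b[i][j] / p[j] for j in range(m)] for i in range(n)]  # allocations
        util = [sum(u[i][j] * x[i][j] for j in range(m)) for i in range(n)]
        # Resplit each budget in proportion to the utility earned per good.
        b = [[B[i] * u[i][j] * x[i][j] / util[i] for j in range(m)]
             for i in range(n)]
    return [sum(b[i][j] for i in range(n)) for j in range(m)]

# Two buyers, two goods; equilibrium prices for this instance are (1, 1).
p = proportional_response([1.0, 1.0], [[2.0, 1.0], [1.0, 1.0]])
print([round(q, 2) for q in p])  # -> [1.0, 1.0]
```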

Local Dynamics in Bargaining Networks via Random-Turn Games
[pdf]
with Elisa Celis and Yuval Peres.
In Proc. WINE 2010.
We present a new technique for analyzing the rate of convergence
of local dynamics in bargaining networks. The technique reduces
balancing in a bargaining network to optimal play in a random-turn
game. We analyze this game using techniques from martingale and
Markov chain theory. We obtain a tight polynomial bound on the rate
of convergence for a nontrivial class of unweighted graphs (the previously
known bound was exponential). Additionally, we show this technique
extends naturally to many other graphs and dynamics.

Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks
[pdf]
with Yossi Azar, Benjamin Birnbaum, L. Elisa Celis and Yuval Peres.
In Proc. FOCS 2009.
Bargaining games on exchange networks have been
studied by both economists and sociologists. A Balanced Outcome
for such a game is an equilibrium concept that
combines notions of stability and fairness. In a recent paper,
Kleinberg and Tardos introduced balanced outcomes to the
computer science community and provided a polynomial-time
algorithm to compute the set of such outcomes. Their work left
open a pertinent question: are there natural, local dynamics that
converge quickly to a balanced outcome? In this paper, we provide
a partial answer to this question by showing that simple edge
balancing dynamics converge to a balanced outcome whenever one
exists.

The Price of Truthfulness for Pay-Per-Click Auctions
[pdf]
with Sham Kakade.
In Proc. ACM EC 2009.
We analyze the problem of designing a truthful pay-per-click auction where
the click-through rates (CTRs) of the bidders are unknown to the auction. Such an auction faces the classic explore/exploit dilemma: while gathering information about the click-through rates of advertisers, the mechanism may lose revenue; however, this gleaned information may prove valuable in the future for a more profitable allocation. In this sense, such mechanisms are prime candidates to be designed using multi-armed bandit techniques. However, a naive application of multi-armed bandit algorithms would not take into account the strategic considerations of the players: players might manipulate their bids (which determine the auction's revenue) so as to maximize their own utility. Hence,
we consider the natural restriction that the auction be truthful.
The revenue that we could hope to achieve is the expected revenue of a
Vickrey auction that knows the true CTRs, and we define the truthful regret to be the difference
between the expected revenue of the auction and this Vickrey revenue. This work sharply characterizes what regret is achievable under a truthful restriction. We show that the truthful restriction imposes statistical limits on the achievable regret: the achievable regret is $\tilde{\Theta}(T^{2/3})$, while for traditional bandit algorithms (without the truthful restriction) the achievable regret is $\tilde{\Theta}(T^{1/2})$ (where $T$ is the number of rounds). We term the extra $T^{1/6}$ factor the `price of truthfulness'.

Limited and Online Supply and the Bayesian foundations of prior-free mechanism design
[pdf]
with Jason Hartline.
In Proc. ACM EC 2009.
We study auctions for selling a limited supply of a single commodity
in the case where the supply is known in advance and the case it is
unknown and must be instead allocated in an online fashion. The
latter variant was proposed by Mahdian and Saberi as a model of an
important phenomenon in auctions for selling Internet advertising:
advertising impressions must be allocated as they arrive and the total
quantity available is unknown in advance. We describe the Bayesian
optimal mechanism for these variants and extend the random sampling
auction of Goldberg et al. to address the prior-free
case.

An Online Multi-unit Auction with Improved Competitive Ratio
[pdf]
with Sourav Chakraborty.
In Proc. WINE 2009.
We improve the best known competitive ratio (from 1/4 to 1/2)
for the online multi-unit allocation problem,
where the objective is to maximize the single-price revenue.
Moreover, the competitive ratio of our algorithm tends to 1
as the bid profile tends to ``smoothen''.
This algorithm is used as a subroutine in designing truthful auctions
for the same setting: the allocation has to be done online,
while the payments can be decided at the end of the day.
Earlier, a reduction from the auction design problem to the allocation problem
was known only for the unit-demand case.
We give a reduction for the general case when the bidders have
decreasing marginal utilities.
The problem is inspired by sponsored search auctions.

Strategyproof Cost-sharing Mechanisms
for Set Cover and Facility Location Games
[pdf]
with Milena Mihail and Vijay Vazirani.
In Decision Support Systems 39 (March 2005), pp. 11-22.
Preliminary version in Proc. ACM EC 2003.
Strategyproof cost-sharing mechanisms, lying in the core,
that recover $1/\alpha$ fraction of the cost, are
presented for the set cover and facility location games; $\alpha =
O(\log n)$ for the former and 1.861 for the latter. Our mechanisms
utilize approximation algorithms for these problems based
on the method of dual fitting.

Monotonicity in Bargaining Networks
[pdf]
with Yossi Azar, Kamal Jain and Yuval Rabani.
In Proc. SODA 2010.
We study bargaining networks, discussed in a recent paper of Kleinberg and
Tardos, from the perspective of cooperative game theory. In
particular we examine three solution concepts, the nucleolus, the core
center and the core median.
All solution concepts define unique solutions, so they provide
testable predictions.
We define a new monotonicity property that is a natural axiom of any bargaining game solution,
and we prove that all three of them satisfy this monotonicity property.
This is in contrast to the conventional wisdom for general cooperative games that
monotonicity and the core condition (a basic property that all three of them satisfy)
are incompatible with each other.
Our proofs are based on a
primal-dual argument (for the nucleolus) and on the FKG inequality (for the
core center and the core median). We further observe some qualitative differences between the
solution concepts. In particular, there are cases where a strict version of
our monotonicity property is a natural axiom, but only the core center and the core median
satisfy it. On the other hand, the nucleolus is easy to compute, whereas
computing the core center or the core median is #P-hard (yet it can be approximated in
polynomial time).

A Computational Theory of Awareness and Decision Making
[pdf]
with Lance Fortnow.
In Proc. Theoretical Aspects of Rationality and Knowledge (TARK) 2009.
We exhibit a new computational definition of awareness;
informally, our level of unawareness of an object is the amount
of time needed to generate that object within a certain environment.
We give several examples to show this notion matches our intuition
in scenarios where one organizes, accesses and transfers
information. We also give a formal process-independent definition of
awareness based on Levin's universal enumeration.
We show the usefulness of computational awareness by showing how it
relates to decision making, and how others can manipulate our
decision making with appropriate advertising; in particular, we show
connections to sponsored search and brand awareness. Understanding
awareness can also help rate the effectiveness of various user
interfaces designed to access information.

On the Equivalence of Competitive and Submodular Markets
[pdf]
with Deeparnab Chakrabarty.
In Operations Research Letters, vol. 37, no. 3, pp. 155-158, 2009.
Preliminary version in Proc. WINE 2007.
In this paper, we study competitive markets: a market is
competitive if increasing the endowment of any one buyer does not
increase the equilibrium utility of any other buyer. In the Fisher setting,
competitive markets contain all markets with weak gross substitutability
(WGS), a property which enables efficient algorithms for equilibrium
computation.
We show that every uniform utility allocation (UUA) market that is
competitive is a submodular utility allocation (SUA) market. Our result
provides evidence for the existence of efficient algorithms for the class
of competitive markets.

Price of Anarchy, Locality Gap, and
a Network Service Provider Game
[pdf]
with Naveen Garg, Rohit Khandekar, Vinayaka Pandit, Amin Saberi, and Vijay V. Vazirani.
In Proc. WINE 2005.
In this paper, we define a network service provider game. We show that
the price of anarchy of the defined game can be bounded by analyzing
a local search heuristic for a related facility location
problem called the $k$-facility location problem. As a result, we show
that the $k$-facility location problem has a locality gap of 5. This
result is of interest on its own. Our result gives evidence to the
belief that the price of anarchy of certain games is related to the
analysis of local search heuristics.

Fast Algorithms for Finding Matchings in Lopsided
Bipartite Graphs with Applications to Display Ads
[pdf]
with Denis Charles, Max Chickering, Kamal Jain and Manan Sanghi.
In Proc. ACM EC 2010.
We derive efficient algorithms for both detecting and representing
matchings in lopsided bipartite graphs; such graphs have so many
nodes on one side that it is infeasible to represent them in memory or
to identify matchings using standard approaches. Detecting and
representing matchings in lopsided bipartite graphs is important for
allocating and delivering guaranteed-placement display
ads, where the corresponding bipartite graph of interest has nodes
representing advertisers on one side and nodes representing webpage
impressions on the other; real-world instances of such graphs can have
billions of impression nodes. We provide theoretical guarantees for
our algorithms, and in a real-world advertising application, we
demonstrate the feasibility of our detection algorithms.
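For intuition about the underlying matching subproblem, here is a minimal sketch using the textbook augmenting-path method (this is not the paper's approach, which is designed precisely for graphs too large for such explicit representations); the tiny graph is made up:

```python
def match_advertisers(adj):
    """Greedy augmenting-path bipartite matching.

    adj[i] lists the impressions advertiser i can be shown on.
    Returns (matching size, impression -> advertiser map).
    """
    owner = {}  # impression -> advertiser currently matched to it

    def try_assign(i, seen):
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            # Take j if free, or evict its owner if the owner can move.
            if j not in owner or try_assign(owner[j], seen):
                owner[j] = i
                return True
        return False

    size = sum(try_assign(i, set()) for i in range(len(adj)))
    return size, owner

# Three advertisers, three impressions; a perfect matching exists.
size, owner = match_advertisers([[0, 1], [0], [1, 2]])
print(size)  # 3
```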

The Adwords Problem: Online Keyword Matching
with Budgeted Bidders under Random Permutations
[pdf]
with Tom Hayes.
In Proc. ACM EC 2009.
We consider the problem of a search engine trying to assign a sequence
of search keywords to a set of competing bidders, each with a daily
spending limit. The goal is to maximize the revenue generated by these
keyword sales, bearing in mind that, as some bidders may eventually
exceed their budget, not all keywords should be sold to the highest bidder.
We assume that the sequence of keywords (or equivalently, of bids)
is revealed online. Our concern will be the competitive ratio for this
problem versus the offline optimum.
We extend the current literature on this problem by considering the setting
where the keywords arrive in a random order. In this setting we are able to
achieve a competitive ratio of $1-\epsilon$ under some mild, but necessary,
assumptions. In contrast, it is already known that when the keywords arrive
in an adversarial order, the best competitive ratio is bounded away from 1.
Our algorithm is motivated by PAC learning, and proceeds in two parts: a
training phase, and an exploitation phase.
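A sketch of the two-part structure in miniature, showing only the exploitation phase: assume the training phase has already produced a per-bidder discount $\alpha_i$ (the paper derives such information by learning on the random prefix; the learning step is omitted here, and all numbers below are made up). Each arriving keyword goes to the bidder maximizing the discounted bid $b_{ij}(1-\alpha_i)$, among bidders with budget left.

```python
def exploit(keywords, budgets, alpha):
    """Discounted-bid allocation rule for the exploitation phase (sketch).

    keywords: list of bid vectors, one bid per bidder for each keyword.
    alpha: per-bidder discounts, assumed already learned in training.
    A winning bidder pays its bid, capped by its remaining budget.
    """
    remaining = list(budgets)
    revenue = 0.0
    for bids in keywords:
        live = [i for i in range(len(budgets)) if remaining[i] > 0]
        if not live:
            break
        best = max(live, key=lambda i: bids[i] * (1 - alpha[i]))
        pay = min(bids[best], remaining[best])
        remaining[best] -= pay
        revenue += pay
    return revenue

# Two bidders with budgets 10 each; discounting bidder 0 steers
# early keywords toward bidder 1, saving bidder 0's budget for later.
r = exploit([[8, 5], [6, 6], [4, 9]], budgets=[10, 10], alpha=[0.5, 0.0])
print(r)  # 14.0
```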

New Geometry-Inspired Relaxations and Algorithms for the Metric Steiner Tree Problem.
[pdf]
with Deeparnab Chakrabarty and Vijay Vazirani.
In Math Programming (Series A), Volume 122, Number 2 (April 2010).
(Preliminary version in IPCO 2008.)
Determining the integrality gap of the bidirected cut relaxation for the metric Steiner tree
problem, and exploiting it algorithmically, is a longstanding open problem. We use geometry
to define an LP whose dual is equivalent to this relaxation. This opens up the possibility of
using the primaldual schema in a geometric setting for designing an algorithm for this problem.
Using this approach, we obtain a 4/3-factor algorithm and integrality gap bound for the case
of quasi-bipartite graphs; the previous best integrality gap upper bound was 3/2. We
also obtain a factor $\sqrt{2}$ strongly polynomial algorithm for this class of graphs.
A key difficulty experienced by researchers in working with the bidirected cut relaxation
was that any reasonable dual growth procedure produces extremely unwieldy dual solutions.
A new algorithmic idea helps finesse this difficulty: reducing the cost of certain edges
and constructing the dual in this altered instance. This idea can be extracted into a new
technique for running the primal-dual schema in the setting of approximation algorithms.

Integrality Gaps for Sparsest Cut and Minimum Linear Arrangement Problems
[pdf]
with Subhash A. Khot, Rishi Saket and Nisheeth K. Vishnoi.
In Proc. STOC 2006.
Arora, Rao and Vazirani
showed that the standard semidefinite programming (SDP)
relaxation of the sparsest cut problem with the triangle
inequality
constraints has an integrality gap of $O(\sqrt{\log n})$. They
conjectured that the gap is bounded from above by a constant. In this paper, we disprove
this conjecture
by constructing an $\Omega(\log \log n)$ integrality gap
instance. Khot and Vishnoi had earlier disproved the
nonuniform version of this conjecture.
A simple ``stretching'' of the integrality gap instance for the
sparsest cut problem serves as an $\Omega(\log \log n)$ integrality gap
instance for the SDP relaxation of the Minimum Linear Arrangement problem. This
SDP relaxation was considered in Charikar et al. and Feige and Lee, where it was shown that its integrality gap is bounded
from above by $O(\sqrt{\log n} \log \log n).$

On Computing the Distinguishing Numbers of Planar Graphs and Beyond: a Counting Approach
[pdf]
with V. Arvind and Christine Cheng.
In SIAM Journal on Discrete Mathematics, vol. 22, no. 4 (October 2008), pp. 1297-1324.
A vertex k-labeling of a graph G is distinguishing if the only automorphism that preserves the labels
of G is the identity map. The distinguishing number of G, D(G), is the smallest integer k for which G
has a distinguishing k-labeling. In this paper, we apply the principle of inclusion-exclusion and develop
recursive formulas to count the number of inequivalent distinguishing k-labelings of a graph. Along the
way, we prove that the distinguishing number of a planar graph can be computed in time polynomial in
the size of the graph.
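For small graphs the definition can be checked directly by brute force (exponential time, in contrast to the paper's polynomial-time result for planar graphs); the known values D(C_5) = 3 and D(C_6) = 2 serve as a sanity check:

```python
from itertools import permutations, product

def distinguishing_number(n, edges):
    """Smallest k admitting a distinguishing k-labeling (brute force)."""
    E = {frozenset(e) for e in edges}
    # Automorphisms: vertex permutations mapping edges to edges.
    autos = [p for p in permutations(range(n))
             if all(frozenset((p[a], p[b])) in E for a, b in edges)]
    for k in range(1, n + 1):
        for lab in product(range(k), repeat=n):
            # Distinguishing: only the identity preserves all labels.
            if all(p == tuple(range(n)) or
                   any(lab[p[v]] != lab[v] for v in range(n))
                   for p in autos):
                return k

c5 = [(i, (i + 1) % 5) for i in range(5)]
c6 = [(i, (i + 1) % 6) for i in range(6)]
print(distinguishing_number(5, c5))  # 3: no 2-labeling breaks all symmetries of C_5
print(distinguishing_number(6, c6))  # 2
```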

On the complexity of Hilbert's 17th
problem
[pdf]
with Richard J. Lipton and Nisheeth Vishnoi.
In Proc. FSTTCS 2004.
Hilbert posed the following problem as the 17th in the list of 23 problems in his famous 1900 lecture:
Given a
multivariate polynomial that takes only nonnegative values over
the reals, can it be represented as a sum of squares of rational
functions?
In 1927, E.~Artin gave an affirmative answer to this question.
His result guaranteed the existence of such a finite representation and
raised the following important question:
What is the {\bf minimum number} of rational functions needed to represent
any nonnegative $n$-variate, degree $d$ polynomial?
In 1967, Pfister proved that any $n$-variate nonnegative
polynomial over the reals can be written as a sum of squares of at most $2^n$ rational functions.
In spite of a considerable effort by
mathematicians for over 75 years, it is not known
whether $n+2$ rational functions are sufficient!
Given the lack of progress towards the resolution of this question, we initiate the study of Hilbert's 17th problem from the point
of view of computational complexity.
In this setting, the following question is a natural relaxation:
What is the {\bf descriptive complexity} of the sum of squares representation (as rational functions) of
a nonnegative, $n$-variate, degree $d$ polynomial?
We consider arithmetic circuits as a natural representation of rational functions.
We show, assuming a standard conjecture in complexity theory, that it is impossible that every nonnegative, $n$-variate, degree four
polynomial can be represented as a sum of squares of a small (polynomial in $n$) number of rational functions, each of which has a small arithmetic circuit (over the rationals) computing it.
Our result suggests that it is unlikely
that every nonnegative, $n$-variate polynomial over the reals can be written as a sum of squares of a polynomial (in $n$) number of rational functions. Further, relating this question to standard (and believed to be hard to prove) complexity-theoretic conjectures sheds some light on why it has been difficult for mathematicians to close the gap between $n+2$ and $2^n$. We hope that our line of work will play an important role in the resolution of this question.
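For context, a classical example (not from the paper) of why rational functions are needed at all: the Motzkin polynomial

\[
M(x,y) \;=\; x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1
\]

is nonnegative on $\mathbb{R}^2$ (by the AM-GM inequality applied to the terms $x^4y^2$, $x^2y^4$, and $1$), yet it is not a sum of squares of polynomials. However, $(x^2+y^2)^2 \, M(x,y)$ is a sum of squares of polynomials, so $M$ is a sum of squares of rational functions with denominator $x^2+y^2$, exactly as Artin's theorem guarantees.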
