Accepted Papers with Abstracts
Pricing Traffic in a Spanning Network
Herve Moulin,
Rice University
Each user of the network needs to connect a pair of target nodes. There are
no variable congestion costs, only a direct connection cost for each pair of
nodes. A centralized mechanism elicits target pairs from users, and builds the
cheapest forest meeting all demands. We look for cost-sharing rules satisfying two
properties. Routing-proofness: no user can lower his cost by reporting as several
users along an alternative path connecting his target nodes. Stand Alone core
stability: no group of users pays more than the cost of a sub-network meeting all
connection needs of the group. We first construct two core-stable and routing-proof
rules when connection costs are all 0 or 1.
One is derived from the random spanning tree weighted by the volume of traffic
on each edge; the other is the weighted Shapley value of the Stand Alone
cooperative game. For arbitrary
connection costs, we prove that the core is non-empty if the graph of target
pairs connects all pairs of nodes. Then we extend both rules above by the
piecewise-linear technique. The former rule is computable in polynomial time,
the latter is not.
Modeling Volatility in Prediction Markets
Nikolay Archak,
New York University
Panagiotis Ipeirotis, New York University
Nowadays, there is significant experimental evidence that prediction
markets are efficient mechanisms for aggregating information and are more
accurate in forecasting events than traditional forecasting methods.
Interpretation of market prices as probabilities has been extensively studied
in the literature. However, there is very little research on the volatility of
prediction market prices. Given that volatility is fundamental in estimating
the significance of price movements, it is important to have a better
understanding of the volatility of contract prices. In this paper, we present a model of a
prediction market with a binary payoff on a competitive event involving two
parties. In our model, each party has an underlying "ability" process that
describes its potential to win and evolves as an Ito diffusion, a generalized
form of Brownian motion. We show that, if the prediction market is efficient
and unbiased, the price of the corresponding contract also follows a diffusion
and its instantaneous volatility is a function of the current contract price
and its time to expiration. In the experimental
section, we validate our model on a set of InTrade prediction markets and show
that it is consistent with observed volatility of contract returns and
outperforms existing volatility models in predicting future contract volatility
from historical price data. To demonstrate the practical value of our model, we
apply it to pricing options on prediction market contracts, such as those
recently introduced by InTrade. Other potential applications of this model
include detection of significant market moves and improving forecast standard
errors.
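As a toy illustration of the kind of model described above (our own simplified
rendering, not the paper's specification), suppose both parties' abilities follow
independent standard Brownian motions and the contract pays 1 if party A is ahead
at expiration T. The efficient price then depends only on the current ability gap
and the time remaining, which is the qualitative property asserted above for
general Ito diffusions. A minimal Python sketch:

# Toy illustration (not the paper's exact model): two "ability" processes follow
# independent standard Brownian motions; the contract pays 1 if party A is ahead
# of party B at expiration T. Under this simplification the efficient price at
# time t is P_t = Phi((A_t - B_t) / sqrt(2 * (T - t))), so its instantaneous
# volatility is a function of the current price and the time to expiration.
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_price_path(T=1.0, steps=1000, seed=0):
    rng = random.Random(seed)
    dt = T / steps
    a = b = 0.0          # current "abilities" of the two parties
    prices = []
    for i in range(steps):
        t = i * dt
        prices.append(phi((a - b) / math.sqrt(2.0 * (T - t))))
        a += rng.gauss(0.0, math.sqrt(dt))
        b += rng.gauss(0.0, math.sqrt(dt))
    return prices

path = simulate_price_path()
print(path[0], path[len(path) // 2], path[-1])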
But Who Will Monitor the Monitor?
David Rahman,
University of Minnesota
Consider a group of individuals in a strategic environment with moral
hazard and adverse selection, and suppose that providing incentives for a given
outcome requires a monitor to detect deviations. What about the monitor’s
deviations? This paper analyzes mediated contracts to answer such a question,
and asserts that the monitor's deviations are effectively irrelevant. Hence,
nobody needs to monitor the monitor. The paper also characterizes exactly when
such contracts can provide monitors with the right incentives to perform. In
doing so, several characterizations of virtual implementation are derived.
Policy Teaching Through Reward Function Learning
Haoqi Zhang,
Harvard University
David Parkes,
Harvard University
Yiling Chen, Harvard University
Policy teaching [21] considers a Markov Decision Process setting in which
an interested party aims to influence an agent's decisions by providing limited
incentives. In this paper, we consider the specific objective of inducing a
pre-specified desired policy. We examine both the case in which the agent's reward
function is known to the interested party and the case in which it is unknown,
presenting a linear program for the former and formulating an active, indirect
elicitation method for the latter. We provide conditions for convergence and
logarithmic convergence, and present a polynomial-time algorithm that ensures
logarithmic elicitation convergence with arbitrarily high probability (neither
guarantee is available for the alternate, value-based policy teaching model
of Zhang and Parkes [21]). We adapt a two-sided slack maximization heuristic to
this setting, and demonstrate its effectiveness in a simulated ad-network
setting. We extend our methods to settings with partial observations and
partial target policies, and offer a game-theoretic interpretation of our
methods.
Eliciting Truthful Answers to Multiple-Choice
Questions
Nicolas Lambert,
Stanford University
Yoav Shoham,
Stanford University
Motivated by the prevalence of online questionnaires in electronic
commerce, and of multiple-choice questions in such questionnaires, we consider
the problem of eliciting truthful answers to multiple-choice questions from a
knowledgeable respondent. Specifically, each question is a statement regarding
an uncertain future event, and is multiple-choice -- the respondent must select
exactly one of the given answers. The principal offers a payment, whose amount
is a function of the answer selected and the true outcome (which the principal
will eventually observe). This problem significantly generalizes recent work on
truthful elicitation of distribution properties, which itself generalized a
long line of work in elicitation of complete distributions. Our results include
necessary and sufficient conditions for the existence of payments that induce
truthful answers, and provide various characterizations of those payments. We
study in greater detail the common case of questions with ordinal answers.
Finally, we design prediction markets adapted for the purpose of answering
multiple-choice questions.
Computational Analysis of Perfect-Information Position
Auctions
David Thompson,
University of British Columbia
Kevin Leyton-Brown,
University of British Columbia
Position auctions were widely used by search engines to sell keyword
advertising before being well understood (and, indeed, studied)
theoretically. To date, theorists have
made significant progress, for example showing that a given auction is
efficient or revenue-dominates a benchmark auction such as VCG. This paper augments that line of work,
relying on computational equilibrium analysis.
By computing Nash equilibria and calculating their expected revenue and
social welfare, we can quantitatively answer questions that theoretical methods
have not been able to address. Broadly, the questions we
answer are: (1) How often do the theoretically predicted ``good'' (i.e.,
efficient, high-revenue) equilibria of GSP occur? (2) In models where GSP is known to be
inefficient, how much welfare does it waste?
We also use our data to examine the larger question: Is GSP a good
choice, compared with the alternatives?
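For readers unfamiliar with the auction being analyzed, the sketch below shows the
generalized second-price rule in its simplest, unweighted form: bidders are ranked
by bid, the k-th highest bidder receives the k-th slot, and each winner pays the
next-highest bid per click. The function and the example numbers are illustrative
only, not taken from the paper.

# Minimal sketch of the unweighted GSP rule (illustrative; variable names ours).
def gsp(bids, ctrs):
    """bids: per-click bids indexed by bidder; ctrs: slot click-through rates in
    decreasing order. Returns (bidder, slot, price_per_click) tuples."""
    ranking = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    outcome = []
    for slot, bidder in enumerate(ranking[:len(ctrs)]):
        runner_up = ranking[slot + 1] if slot + 1 < len(ranking) else None
        price = bids[runner_up] if runner_up is not None else 0.0
        outcome.append((bidder, slot, price))
    return outcome

# Example: three bidders, two slots with CTRs 0.10 and 0.05.
print(gsp([4.0, 7.0, 5.0], [0.10, 0.05]))
# -> bidder 1 wins slot 0 paying 5.0 per click; bidder 2 wins slot 1 paying 4.0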
A New Perspective on Implementation by Voting Trees
Felix Fischer,
Ludwig-Maximilians University
Ariel Procaccia,
Microsoft
Alex Samorodnitsky,
Hebrew University
Voting trees describe an iterative procedure for selecting a single vertex
from a tournament. They provide a very general abstract model of decision-making
among a group of individuals, and it has therefore been studied which voting rules
can be implemented by a tree, i.e., a tree that chooses according to the rule for
every tournament. While partial results
concerning implementable rules and necessary conditions for implementability
have been obtained over the past forty years, a complete characterization of
voting rules implementable by trees has proven surprisingly hard to find. A prominent rule that cannot be implemented
by trees is the Copeland rule, which singles out vertices with maximum
degree. In this paper, we suggest a
new angle of attack and re-examine the implementability of the Copeland
solution using paradigms and techniques that are at the core of theoretical
computer science. Indeed, we study the
extent to which voting trees can approximate the maximum degree in a
tournament, and give upper and lower bounds on the worst-case ratio between the
degree of the vertex chosen by a tree and the maximum degree, both for the
deterministic model concerned with a single fixed tree, and for randomizations
over arbitrary sets of trees. Our main
positive result is a randomization over surjective trees of polynomial size
that provides an approximation ratio of at least 1/2. The proof is based on a
connection between a randomization over caterpillar trees and a rapidly mixing
Markov chain.
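As a concrete reference point for the Copeland solution discussed above, the
hypothetical helper below selects a vertex of maximum (out-)degree in a tournament
given as an adjacency matrix; the paper's question is how closely voting trees can
approximate this maximum degree.

# Illustrative helper (our own): the Copeland rule on a tournament picks a vertex
# with maximum out-degree. beats[i][j] is True iff alternative i beats j.
def copeland_winner(beats):
    n = len(beats)
    degrees = [sum(1 for j in range(n) if i != j and beats[i][j]) for i in range(n)]
    return max(range(n), key=lambda i: degrees[i])

# Example tournament on 4 alternatives: 0 beats everyone, so 0 is the winner.
beats = [
    [False, True,  True,  True],   # 0 beats 1, 2, 3
    [False, False, True,  False],  # 1 beats 2
    [False, False, False, True],   # 2 beats 3
    [False, True,  False, False],  # 3 beats 1
]
print(copeland_winner(beats))  # -> 0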
Approximate Mechanism Design Without Money
Ariel Procaccia,
Microsoft
Moshe Tennenholtz,
Microsoft
The literature on algorithmic mechanism design is mostly concerned with
game-theoretic versions of optimization problems to which standard economic
money-based mechanisms cannot be applied efficiently. Recent years have seen
the design of various truthful approximation mechanisms that rely on enforcing
payments. In this paper, we advocate the reconsideration of highly structured
optimization problems in the context of mechanism design. We argue that, in
such domains, approximation can be leveraged to obtain truthfulness without
resorting to payments. This stands in contrast to previous work where payments
are ubiquitous, and (more often than not) approximation is a necessary evil
that is required to circumvent computational complexity. We present a case study in approximate
mechanism design without money. In our basic setting, agents are located on the
real line and the mechanism must select the location of a public facility; the
cost of an agent is its distance to the facility. We establish tight upper and
lower bounds for the approximation ratio given by strategyproof mechanisms
without payments, with respect to both deterministic and randomized mechanisms,
under two objective functions: the social cost, and the maximum cost. We then
extend our results in two natural directions: a domain where two facilities
must be located, and a domain where each agent controls multiple locations.
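For intuition about the single-facility setting, a standard example from this
literature (not a statement of the paper's results) is to place the facility at a
median of the reported locations: this requires no payments, is strategyproof, and
minimizes the social cost.

# Standard illustrative example, assuming locations on the real line.
def median_mechanism(reported_locations):
    """Return a median of the reported points (lower median for even n)."""
    xs = sorted(reported_locations)
    return xs[(len(xs) - 1) // 2]

print(median_mechanism([0.0, 3.0, 10.0]))      # -> 3.0
print(median_mechanism([1.0, 2.0, 8.0, 9.0]))  # -> 2.0 (lower median)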
Information Aggregation in Dynamic Markets with
Strategic Traders
Michael Ostrovsky,
Stanford University
This paper studies information aggregation in dynamic markets with
partially informed strategic traders. A
natural condition on traded securities and information structures,
"separability," is introduced.
If a security is separable under a given information structure, then
information about its value always gets aggregated, for any prior distribution
over the states of the world: In equilibrium, the market price of the security
converges to its expected value given the traders' pooled information. If the security is non-separable, then there
exists a prior such that information does not get aggregated. Special cases satisfying separability include
Arrow-Debreu securities, whose value is one in one state of the world and zero
in all others, and "additive" securities, whose value is the sum of
traders' signals.
Social Lending
Ning Chen,
Nanyang Technological University
Arpita Ghosh,
Yahoo! Research
Nicolas Lambert,
Stanford University
Prosper, the largest online social lending marketplace with nearly a
million members and $178 million in funded loans, uses an auction amongst
lenders to finance each loan. In each auction, the borrower specifies D, the
amount he wants to borrow, and a maximum acceptable interest-rate R. Lenders
specify the amounts a_i they want to lend, and bid on the interest-rate, b_i,
they're willing to receive. Given that a basic premise of social lending is
cheap loans for borrowers, how does the Prosper auction do in terms of the
borrower's payment, when lenders are strategic agents with private true
interest rates? The Prosper mechanism is
exactly the same as the VCG mechanism applied to a modified instance of the
problem. However, the two mechanisms behave very differently --- VCG is
truthful, whereas Prosper is not, and the total payment can be vastly
different. We first provide a complete analysis of the Nash equilibria of the
Prosper mechanism. Next, we show that while the payment in the VCG mechanism is
always within a factor of O(log D) of that in any equilibrium of Prosper, even
the cheapest Nash equilibrium of Prosper can be as large as a factor of D times
the VCG payment; both factors are tight. Thus, while the Prosper mechanism is a simple uniform
price mechanism, it can lead to much larger payments than VCG. Finally, we
provide a model to study Prosper as a dynamic auction, and give tight bounds on
the price for a general class of bidding strategies.
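To make the setting concrete, the sketch below implements one simple uniform-price
clearing rule consistent with the description above: the cheapest lenders are
funded until the demand D is met, and every winner receives the rate of the first
excluded bid. This is our own stylization; the abstract does not pin down Prosper's
exact tie-breaking or rate-setting details.

# Stylized uniform-price clearing for a single loan (illustrative only).
def uniform_price_clearing(D, R, bids):
    """bids: list of (amount a_i, rate b_i). Lenders bidding above the borrower's
    cap R are excluded. Returns (allocations, clearing_rate) or None if D is unmet."""
    eligible = sorted((i for i in range(len(bids)) if bids[i][1] <= R),
                      key=lambda i: bids[i][1])
    allocations, remaining = {}, D
    clearing_rate = R                      # default if every eligible bid is needed
    for i in eligible:
        if remaining <= 0:
            clearing_rate = bids[i][1]     # rate of the first excluded eligible bid
            break
        funded = min(bids[i][0], remaining)
        allocations[i] = funded
        remaining -= funded
    if remaining > 0:
        return None                        # loan not fully funded
    return allocations, clearing_rate

# Example: D = 1000, cap R = 20%, four lenders.
print(uniform_price_clearing(1000, 0.20,
                             [(400, 0.08), (500, 0.10), (300, 0.12), (200, 0.15)]))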
On the Complexity of Nash Dynamics and Sink Equilibria
Vahab Mirrokni,
Google
Alexander Skopalik,
RWTH Aachen
Studying Nash dynamics is an important approach for analyzing the outcome
of games with repeated behavior of self-interested agents. Sink equilibria have
been introduced for studying the social cost of Nash dynamics over pure strategies
in games. However, previous work does not address the complexity of sink equilibria
in these games. Recently, Fabrikant and Papadimitriou studied the complexity of
Nash dynamics in two classes of games. In order to understand the complexity of
Nash dynamics in a variety of games, we study the following three questions: (i)
given a state in a game, can we verify if this state is in a sink equilibrium? (ii) given an
instance of a game, can we verify if there exists any sink equilibrium other
than pure Nash equilibria? and (iii)
given an instance of a game, can we verify if there exists a pure Nash
equilibrium? In this paper, we answer almost all of the above questions for a
variety of classes of games with succinct representation, including anonymous
games, player-specific and weighted congestion games, valid-utility games, and
two-sided market games. In particular, for
most of these problems, we show that (i) it is PSPACE-complete to verify if a
given state is in a sink equilibrium,
(ii) it is NP-hard to verify if there exists a pure Nash equilibrium in
the game, (iii) it is PSPACE-complete to verify if there exists any sink
equilibrium other than pure Nash equilibria. We illustrate general techniques
that could be used to answer similar
questions in other classes of games.
Self-Correcting Sampling-Based Dynamic Multi-Unit
Auctions
Florin Constantin,
Harvard
David Parkes,
Harvard
In this paper we exploit methods of sample-based stochastic optimization
for the purpose of designing strategyproof dynamic, multi-unit auctions. There are no
analytic characterizations of optimal policies for this domain and thus a
heuristic approach, such as that proposed here, seems necessary in practice.
Following the suggestion of Parkes and Duong in an earlier paper, we perform
sensitivity analysis on the allocation decisions of an online algorithm for
stochastic optimization, and correct the decisions to enable a strategyproof
auction. In applying this approach to the allocation of non-expiring goods, the
technical problem that we must address is related to achieving
strategyproofness for reports of departure. This cannot be achieved through
self-correction without canceling many allocation decisions, and must instead
be achieved by first modifying the underlying algorithm. We introduce the
NowWait method for this purpose, prove that it interfaces correctly with
sensitivity analysis, and demonstrate good empirical performance. Our method is
quite general, requiring a technical property of uncertainty independence and
that values are not too positively correlated with agent patience. We also show
how to incorporate “virtual valuations” (due to Myerson) in our method to
increase the seller’s revenue.
Managing the Quality of CPC Traffic
Bobji Mungamuru,
Stanford University
Hector Garcia-Molina,
Stanford University
We show how an online advertising network can use filtering, predictive
pricing and revenue sharing together to manage the quality of cost-per-click
(CPC) traffic. Our results suggest that predictive pricing alone can and should
be used instead of filtering to manage organic traffic quality, whereas either
method can be used to deter click inflation.
An Exact Almost Optimal Algorithm for Target Set Selection
in Social Networks
Oren Ben-Zwi,
Haifa University
Danny Hermelin, Haifa University
Daniel Lokshtanov,
University of Bergen
Ilan Newman,
Haifa University
The Target Set Selection problem proposed by Kempe, Kleinberg, and Tardos,
gives a nice clean combinatorial formulation for many problems arising in
economics, sociology, and medicine. Its input is a graph with vertex thresholds,
the social network, and the goal is to find a subset of vertices, the target
set, that “activates” a prespecified number of vertices in the graph.
Activation of a vertex is defined via a so-called activation process as
follows: Initially, all vertices in the target set become active. Then at each
step i of the process, each vertex gets activated if the number of its active neighbors
at iteration i - 1 exceeds its threshold.
Unsurprisingly perhaps, Target Set Selection is NP-complete. More
surprising is the fact that both of its maximization and minimization variants
turn out to be extremely hard to approximate, even for very restrictive special
cases. Our contribution is twofold:
First, we present an algorithm for Target Set Selection running in n^O(w) time,
for graphs with n vertices and treewidth bounded by w. The algorithm utilizes various
combinatorial properties of the problem; drifting somewhat from standard
dynamic-programming algorithms for small-treewidth graphs. Also, it can be
adapted to much more general settings, including the case of directed graphs,
weighted edges, and weighted vertices. On the other hand, we also show that an
n^o(sqrt{w})-time algorithm for Target Set Selection is unlikely to exist, as
this would imply a sub-exponential algorithm for all problems in
SNP.
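The activation process described above is easy to state in code. The sketch below
follows the common convention that a vertex becomes active once the number of its
active neighbors meets or exceeds its threshold (the abstract says "exceeds";
conventions differ on strictness) and stays active thereafter.

# Direct sketch of the threshold activation process (illustrative).
def activation_process(neighbors, threshold, target_set):
    """neighbors: dict vertex -> set of adjacent vertices;
    threshold: dict vertex -> activation threshold;
    target_set: initially active vertices. Returns the final set of active vertices."""
    active = set(target_set)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in active:
                if sum(1 for u in neighbors[v] if u in active) >= threshold[v]:
                    active.add(v)
                    changed = True
    return active

# Example: a path a-b-c where each vertex activates once one neighbor is active.
neighbors = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
threshold = {'a': 1, 'b': 1, 'c': 1}
print(activation_process(neighbors, threshold, {'a'}))  # -> {'a', 'b', 'c'}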
Revenue Submodularity
Mukund Sundararajan,
Stanford
Tim Roughgarden,
Stanford
Shaddin Dughmi,
Stanford
We introduce {\em revenue submodularity}, the property that market
expansion has diminishing returns on an auction's expected revenue. We prove
that revenue submodularity is generally possible only in matroid markets, that Bayesian-optimal
auctions are always revenue-submodular in such markets, and that the VCG
mechanism is revenue-submodular in matroid markets with IID bidders and
``sufficient competition''. We also give
two applications of revenue submodularity: good approximation algorithms for
novel market expansion optimization problems, and approximate revenue
guarantees for the VCG mechanism with IID bidders.
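Read as a statement about set functions, the diminishing-returns property above
says the following, where Rev(S) denotes the auction's expected revenue when the
set of participating bidders is S (notation ours):

\[
\mathrm{Rev}(S \cup \{i\}) - \mathrm{Rev}(S) \;\ge\; \mathrm{Rev}(T \cup \{i\}) - \mathrm{Rev}(T)
\qquad \text{for all } S \subseteq T \text{ and } i \notin T.
\]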
On Random Sampling Auctions for Digital Goods
Saeed Alaei,
UMD-CP
Azarakhsh Malekian,
UMD-CP
Aravind Srinivasan,
UMD-CP
In the context of auctions for digital goods, an interesting Random
Sampling Optimal Price auction (RSOP) has been proposed by Goldberg, Hartline
and Wright; this leads to a truthful mechanism. Since random sampling is a
popular approach for auctions aiming to maximize the seller's revenue, this
method has been analyzed further by Feige, Flaxman, Hartline and Kleinberg, who
have shown that it is $15$-competitive in the worst case -- which is
substantially better than the previously proved bounds but still far from the
conjectured competitive ratio of $4$. In this paper, we prove that RSOP is
indeed $4$-competitive for a large class of instances in which there are at
least 6 bids greater than or equal to the optimal price. We also show that it is
$4.68$-competitive for the small class of remaining instances, thus leaving a
negligible gap between the lower and upper bounds. We employ a mix of probabilistic techniques
and dynamic programming to compute those bounds.
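For reference, the RSOP auction as it is usually described works as follows: the
bidders are split uniformly at random into two halves, the revenue-maximizing
single price is computed on each half, and that price is offered to the bidders of
the other half. A minimal sketch (function and variable names ours):

# Sketch of the random sampling optimal price auction for digital goods.
import random

def optimal_single_price(bids):
    """Price p maximizing p * |{i : bid_i >= p}|, restricted to the bid values."""
    if not bids:
        return float('inf')
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def rsop(bids, rng=random):
    groups = ([], [])
    for i, b in enumerate(bids):
        groups[rng.randrange(2)].append((i, b))
    winners = []
    for g in (0, 1):
        price = optimal_single_price([b for _, b in groups[1 - g]])
        winners += [(i, price) for i, b in groups[g] if b >= price]
    return winners  # list of (bidder, price charged)

print(rsop([1, 1, 2, 5, 5, 8, 8, 9], random.Random(42)))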
Crowdsourcing and All-Pay Auctions
Dominic DiPalantino,
Stanford University
Milan Vojnovic,
Microsoft Research Cambridge
In this paper we present and analyze a model in which users select among,
and subsequently compete in, a collection of contests offering various
rewards. The objective is to capture the
essential features of a crowdsourcing system, an environment in which diverse
tasks are presented to a large community.
We aim to demonstrate the precise relationship between incentives and
participation in such systems. We model
contests as all-pay auctions with incomplete information; as a consequence of
revenue equivalence, our model may also be interpreted more broadly as one in
which users select among auctions of heterogeneous goods. We present two regimes in which we find an
explicit correspondence in equilibrium between the offered rewards and the
users' participation levels. The regimes
respectively model situations in which different contests require similar or
unrelated skills. Principally, we find
that rewards yield logarithmically diminishing returns with respect to
participation levels. We compare these
results to empirical data from the crowdsourcing site Taskcn.com; we find that
as we condition the data on more experienced users, the model more closely
conforms to the empirical data.
On the Price of Mediation
Milan Bradonjic,
Los Alamos National Labs
Gunes Ercal-Ozkaya,
Kansas University
Adam Meyerson,
UCLA
Alan Roytman,
UCLA
We study the relationship between the social cost of correlated equilibria
and the social cost of Nash equilibria. In contrast to previous work focusing
on the possible benefits of a benevolent mediator, we define and bound the
Price of Mediation (PoM): the ratio of the cost of the worst correlated
equilibrium to the cost of the worst Nash equilibrium. We observe that in practice, the
heuristics used for mediation are frequently non-optimal, and from an economic
perspective mediators may be inept or self-interested. Recent results on
computation of equilibria also motivate our work. We consider the Price of Mediation for
general games with small numbers of players and pure strategies. For games with
two players each having two pure strategies we prove a tight bound of two on
the PoM. For larger games (either more players, or more pure strategies per
player, or both) we show that the PoM can be unbounded. Most of our results focus on symmetric
congestion games (also known as load balancing games). We show that for general
convex cost functions, the PoM can grow exponentially in the number of players.
We prove that the PoM is one for linear costs and at most a small constant (but can
be larger than one) for concave costs. For polynomial cost functions, we prove
bounds on the PoM which are exponential in the degree.
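In symbols, the quantity defined above is, for a game G with correlated equilibria
CE(G) and Nash equilibria NE(G):

\[
\mathrm{PoM}(G) \;=\; \frac{\max_{\sigma \in \mathrm{CE}(G)} \mathrm{cost}(\sigma)}{\max_{\sigma \in \mathrm{NE}(G)} \mathrm{cost}(\sigma)}.
\]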
A Unified Framework for Dynamic Pari-Mutuel Information
Market Design
Shipra Agrawal,
Stanford University
Erick Delage,
Stanford University
Mark Peters,
Stanford University
Zizhuo Wang,
Stanford University
Yinyu Ye,
Stanford University
Recently, several new pari-mutuel mechanisms have been introduced to organize
markets for contingent claims. Hanson [7] introduced a market maker derived
from the logarithmic scoring rule, and later Chen & Pennock [5] developed a
cost function formulation for the market maker. On the other hand, the SCPM
model of Peters et al. [9] is based on ideas from a call auction setting using
a convex optimization model. In this work, we develop a unified framework that
bridges these seemingly unrelated models for centrally organizing contingent
claim markets. The framework, developed as a generalization of SCPM, will
support many desirable properties such as proper scoring, truthful bidding (in
a myopic sense), efficient computation, and guarantees on worst case loss. In
fact, our unified framework will allow us to express various proper scoring
rules, existing or new, from classical utility functions in a convex
optimization problem representing the market organizer. Additionally, we
utilize concepts from duality to show that the market model is equivalent to a
risk minimization problem where a convex risk measure is employed. This will
allow us to more clearly understand the differences in the risk attitudes
adopted by various mechanisms, and particularly deepen our intuition about
popular mechanisms like Hanson’s market maker. In aggregate, we believe this
work advances our understanding of the objectives that the market organizer is
optimizing in popular pari-mutuel mechanisms by recasting them into one unified
framework.
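One concrete instance covered by such a framework is Hanson's logarithmic market
scoring rule (LMSR) in the cost-function form developed by Chen & Pennock. A
minimal sketch (the liquidity parameter b and the function names are ours):

# LMSR cost-function market maker over n mutually exclusive outcomes.
import math

def lmsr_cost(q, b=100.0):
    """Cost potential C(q) = b * log(sum_i exp(q_i / b)), computed stably."""
    m = max(q)
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices p_i = exp(q_i/b) / sum_j exp(q_j/b); they sum to 1."""
    m = max(q)
    weights = [math.exp((qi - m) / b) for qi in q]
    total = sum(weights)
    return [w / total for w in weights]

def trade_cost(q_old, q_new, b=100.0):
    """Amount a trader pays to move the outstanding shares from q_old to q_new."""
    return lmsr_cost(q_new, b) - lmsr_cost(q_old, b)

print(lmsr_prices([0.0, 0.0]))               # -> [0.5, 0.5]
print(trade_cost([0.0, 0.0], [10.0, 0.0]))   # cost of buying 10 shares of outcome 0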
Characterizing Truthful Multi-Armed Bandit Mechanisms
Moshe Babaioff,
Microsoft Research
Yogeshwer Sharma, Cornell University
Aleksandrs Slivkins, Microsoft Research
We consider a multi-round auction setting motivated by pay-per-click
auctions for Internet advertising. In each round the auctioneer selects an
advertiser and shows her ad, which is then either clicked or not. An advertiser
derives value from clicks; the value of a click is her private information.
Initially, neither the auctioneer nor the advertisers have any information
about the likelihood of clicks on the advertisements. The auctioneer's goal is
to design a (dominant strategies) truthful mechanism that (approximately)
maximizes the social welfare. If the
advertisers bid their true private values, our problem is equivalent to the
multi-armed bandit problem, and thus can be viewed as a strategic version of
the latter. In particular, for both problems the quality of an algorithm can be
characterized by "regret", the difference in social welfare between
the algorithm and the benchmark which always selects the same "best"
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- *and* they
incur much higher regret than the optimal multi-armed bandit algorithms.
Moreover, we provide a truthful mechanism which (essentially) matches our lower
bound on regret.
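One way to write the regret benchmark described above, in notation introduced here
(v_i the per-click value of advertiser i, \mu_i her click probability, and i_t the
advertiser shown in round t of T), is:

\[
R(T) \;=\; T \cdot \max_i \, v_i \mu_i \;-\; \mathbb{E}\!\left[\, \sum_{t=1}^{T} v_{i_t} \mu_{i_t} \right].
\]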
Substitutes or Complements: Another Step Forward in
Recommendations
Jiaqian Zheng,
Fudan University
Xiaoyuan Wu,
eBay Research Lab
Junyu Niu,
Fudan University
Alvaro Bolivar,
eBay Research Lab
In this paper, we introduce a method for tagging substitute-complement
attributes on miscellaneous recommendation relations, and elaborate on how this
step contributes to electronic merchandising. There are already decades of work
on building recommender systems, and steadily outperforming previous algorithms
is difficult under the conventional framework. However, in real merchandising
scenarios, we find that describing the weight of a recommendation simply as a
scalar is hardly expressive, which hinders the further progress of recommender
systems. We study a large log of user browsing data, revealing typical
substitute-complement relations among items that can further extend recommender
systems by enriching presentation and improving practical quality. Finally, we
provide an experimental analysis and sketch an online prototype to show that
tagging attributes can grant more intelligence to recommender systems by
differentiating recommended candidates to fit their respective scenarios.
Designing Incentives for Online Question and Answers
Forums
Shaili Jain,
Harvard University
Yiling Chen,
Harvard University
David Parkes,
Harvard University
In this paper, we provide a simple game-theoretic model of an online
question and answer forum. Each user has a unique piece of information that is
relevant to a question and can decide when to report this information. The
asker prefers to receive information sooner rather than later, and will stop
the process when satisfied with the cumulative value of the posted
information. The information aggregates
as it is posted, with previous pieces of information combined into the latest
answers. We consider two distinct cases: a complements case, in which each
successive piece of information is worth more to the asker than the previous
one; and a substitutes case, in which each successive piece of information is worth
less. A ``best answer'' scoring rule is
adopted to model Yahoo! Answers, and is effective for substitutes information,
where it isolates an equilibrium in which all users respond in the first round.
But we find that this rule is ineffective for complements information,
isolating instead an equilibrium in which all users respond in the final round.
In addressing this, we demonstrate that an ``approval voting'' scoring rule and
a ``buyer-distributes-the-points'' scoring rule can improve the performance of
the system with complements information, by providing incentives for early
responders as well as for the user who submits the final answer.
Destroy to Save
Geoffroy de Clippel, Brown University
Victor Naroditskiy,
Brown University
Amy Greenwald,
Brown University
We study the problem of how to allocate $m$ identical items among $n>m$
agents, assuming each agent desires exactly one item and has a private value
for consuming the item. We assume the
items are jointly owned by the agents, not by one uninformed center, so an
auction cannot be used to solve our problem.
Instead, the agents who receive items compensate those who do not. This problem has been studied by others recently,
and their solutions have modified the classic VCG mechanism. Advantages of this approach include
strategy-proofness and allocative efficiency.
Further, in an auction setting, VCG guarantees budget balance, because
payments are absorbed by the center. In
our setting, however, where payments are redistributed to the agents, some
money must be burned in order to retain strategy-proofness. Our key finding is that destroying items can
save money, and hence lead to greater social surplus. More specifically, our
first observation is that a mechanism is strategy-proof iff it admits a
threshold representation. Given this
observation, we restrict attention to specific threshold and payment functions
for which we can numerically solve for an optimal mechanism. Whereas the worst-case ratio of the realized
social surplus to the maximum possible is close to 1 when $m=1$ and 0 when
$m=n-1$ under the VCG mechanism, the best mechanism we find coincides with VCG
when $m=1$ but has a ratio approaching 1 when $m=n-1$ as $n$ increases.
The Adwords Problem:
Online Keyword Matching with Budgeted Bidders under Random Permutations
Nikhil Devanur,
Microsoft Research
Thomas Hayes,
University of New Mexico
We consider the problem of a search engine trying to assign a sequence of
search keywords to a set of competing bidders, each with a daily spending
limit. The goal is to maximize the
revenue generated by these keyword sales, bearing in mind that, as some bidders
may eventually exceed their budget, not all keywords should be sold to the
highest bidder. We assume that the sequence of keywords (or equivalently, of
bids) is revealed on-line. Our concern
will be the competitive ratio for this problem versus the off-line
optimum. We extend the current
literature on this problem by considering the setting where the keywords arrive
in a random order. In this setting we
are able to achieve a competitive ratio of $1-\epsilon$ under some mild, but
necessary, assumptions. In contrast, it
is already known that when the keywords arrive in an adversarial order, the best
competitive ratio is bounded away from 1. Our algorithm is motivated by PAC
learning, and proceeds in two parts: a training phase, and an exploitation
phase.
The Price of Truthfulness for Pay-Per-Click Auctions
Nikhil Devanur,
Microsoft Research
Sham Kakade,
TTI Chicago
We analyze the problem of designing a truthful pay-per-click auction where
the click-through-rates (CTR) of the bidders are unknown to the auction. Such
an auction faces the classic explore/exploit dilemma: while gathering
information about the click through rates of advertisers, the mechanism may
lose revenue; however, this gleaned information may prove valuable in the
future for a more profitable allocation. In this sense, such mechanisms are
prime candidates to be designed using multi-armed bandit techniques. However, a
naive application of multi-armed bandit algorithms would not take into account
the \emph{strategic} considerations of the players --- players might manipulate
their bids (which determine the auction's revenue) in such a way as to maximize
their own utility. Hence, we consider
the natural restriction that the auction be truthful. The revenue that we could hope to achieve is
the expected revenue of a Vickrey auction that knows the true CTRs, and we
define the \emph{2nd price regret} to be the difference between the expected
revenue of the auction and this Vickrey revenue. This work sharply
characterizes what regret is achievable, under a truthful restriction. We show
that this truthful restriction imposes \emph{statistical} limits on the
achievable regret --- the achievable regret is $\Theta^*(T^{2/3})$, while for
traditional bandit algorithms (without the truthful restriction) the achievable
regret is $\Theta^*(T^{1/2})$ (where $T$ is the number of rounds). We term the
extra $T^{1/6}$ factor the `price of truthfulness'.
Limited and Online Supply and the Bayesian Foundations
of Prior-Free Mechanism Design
Nikhil Devanur,
Microsoft Research
Jason Hartline,
Northwestern University
We study auctions for selling a limited supply of a single commodity in
the case where the supply is known in advance and the case where it is unknown
and must instead be allocated in an online fashion.
The latter variant was proposed by Mahdian and Saberi as a model of an
important phenomenon in auctions for selling Internet advertising: advertising
impressions must be allocated as they arrive and the total quantity available
is unknown in advance. We describe the
Bayesian optimal mechanism for these variants and extend the random sampling
auction of Goldberg et al.~\cite{GHW-01} to address the prior-free case.
A Qualitative Vickrey Auction
Paul Harrenstein,
University of Munich
Mathijs de Weerdt, Delft University of Technology
Vincent Conitzer,
Duke University
Restricting the preferences of the agents by assuming that their utility functions
depend linearly on payments allows for the positive results of the Vickrey auction
and the Vickrey-Clarke-Groves mechanism. Such quasilinear preferences, however,
involve the controversial assumption that there is some commonly desired commodity
or numeraire---money, shells, beads, etcetera---which is commensurable with
utility. We propose a generalization of the Vickrey auction that does not assume
that the agents' preferences are quasilinear, but still has some of its desirable
properties. In this auction, a bid can be any alternative, rather than just a
monetary offer. Such an auction is also applicable to situations where no
numeraire is available, where there is a fixed budget, or where money is no issue.
We show that in two general settings, this qualitative Vickrey auction has a
dominant-strategy equilibrium, invariably yields a weakly Pareto efficient outcome
in this equilibrium, and is individually rational. The first setting is one where
the center has a linear preference order over a finite set of alternatives, and
the second is one where the bidders' preferences can be represented by continuous
utility functions over a closed metric space of alternatives and the center's
utility is equipeaked. The traditional Vickrey auction turns out to be a special
case of this second setting.
Selling Ad Campaigns: Online Algorithms with
Cancellations
Moshe Babaioff,
Microsoft Research
Jason Hartline,
Northwestern Univ.
Robert Kleinberg,
Cornell University
We study online pricing problems in markets with cancellations, i.e.,
markets in which prior allocation decisions can be revoked, but at a cost. In our model, a seller receives requests
online and chooses which requests to accept, subject to constraints on the
subsets of requests which may be accepted simultaneously. A request, once accepted, can be canceled at
a cost which is a fixed fraction of the request value. This scenario models a market for web
advertising campaigns, in which the buyback cost represents the cost of
cancelling an existing contract. We
analyze a simple constant-competitive algorithm for a single-item auction in
this model, and we prove that its competitive ratio is optimal among
deterministic algorithms, but that the competitive ratio can be improved using
a randomized algorithm. We then model ad campaigns using knapsack domains, in
which the requests differ in size as well as in value. We give a deterministic
online algorithm that achieves a bi-criterion approximation in which both
approximation factors approach $1$ as the buyback factor and the size of the
maximum request approach $0$. We show
that the bi-criterion approximation is unavoidable for deterministic
algorithms, but that a randomized algorithm is capable of achieving a constant
competitive ratio. Finally, we discuss an extension of our randomized algorithm
to matroid domains (in which the sets of simultaneously satisfiable requests
constitute the independent sets of a matroid) as well as present results for
domains in which the buyback factor of different requests varies.
The Price of Uncertainty
Maria-Florina Balcan, Microsoft Research
Avrim Blum,
Carnegie Mellon University
Yishay Mansour,
Tel-Aviv University and Google Research
In this work we study the degree to which small fluctuations in costs in
well-studied potential games can impact the result of natural best-response and
improved-response dynamics. We call this
the {\em Price of Uncertainty} and study it in a wide variety of potential
games (including fair cost-sharing games, set-cover games, routing games, and
job-scheduling games), finding a number of surprising results. In particular,
we show that in certain cases, even extremely small fluctuations can cause
these dynamics to spin out of control and move to states of much higher social
cost, whereas in other cases these dynamics are much more stable even to large
degrees of fluctuation. We also consider
the resilience of these dynamics to a small number of Byzantine players about
which no assumptions are made. We again show a sharp contrast between different games.
In certain cases (e.g., fair cost-sharing, set-covering, job-scheduling)
even a single Byzantine player can cause best-response dynamics to transition
to states of substantially higher cost, whereas in others (e.g., the class of
$\beta$-nice games which includes routing, market-sharing and many others)
these dynamics are much more resilient.
Network Bargaining: Algorithms and Structural Results
Tanmoy Chakraborty,
University of Pennsylvania
Michael Kearns,
University of Pennsylvania
Sanjeev Khanna,
University of Pennsylvania
We consider the following setting: Players are represented by vertices in
a social network, and edges represent bilateral agreements/deals between them.
Each deal yields some fixed wealth if its two endpoints can agree on how to
share the profit, or else it yields zero profit. Many business transactions can
be modeled as such a deal, but the most natural interpretation is a business
partnership. Chakraborty and Kearns (WINE 2008) introduced a simple axiomatic
model that stipulates an equilibrium concept, where all players are rationally
satisfied with their shares. We explore this equilibrium concept in this
paper. We describe an FPTAS to compute an approximate equilibrium on all
bipartite graphs. We show that the equilibrium is not unique, but also give some
conditions that ensure uniqueness on regular graphs. Finally, we explore the effect of
network structure on solutions given by our model, using simulation methods and
statistical analysis.
Simple versus Optimal Mechanisms
Jason Hartline,
Northwestern U.
Tim Roughgarden,
Stanford U.
The monopolist’s theory of optimal single-item auctions for agents with
independent private values can be
summarized by two statements. The first is from Myerson (1981): the optimal
auction is Vickrey with a reserve price. The second is from Bulow and Klemperer
(1996): it is better to recruit one more agent and run the Vickrey auction than
to run the optimal auction. These results hold for single-item auctions only
under the assumption that the agents’ valuations are independently and
identically drawn from a distribution that satisfies a natural (and common) regularity condition. These fundamental guarantees for the Vickrey
auction fail to hold in general single-parameter agent mechanism design problems. We give
precise (and weak) conditions under which approximate analogs of these two
results hold, thereby demonstrating that simple mechanisms remain almost
optimal in quite general single-parameter agent settings.
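For reference, Myerson's characterization invoked above can be stated as follows
(standard facts, not restated in the abstract): with values drawn i.i.d. from a
distribution F with density f, the optimal single-item auction is Vickrey with a
reserve r^* set where the virtual value

\[
\phi(v) \;=\; v - \frac{1 - F(v)}{f(v)}
\]

vanishes, i.e., \phi(r^*) = 0, and regularity means \phi is nondecreasing. For
values uniform on [0,1], for instance, \phi(v) = 2v - 1 and the reserve is r^* = 1/2.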
An Optimal Lower Bound for Anonymous Scheduling
Mechanisms
Itai Ashlagi,
Harvard
Shahar Dobzinski,
Hebrew University
Ron Lavi,
Technion
We consider the problem of designing truthful mechanisms to minimize the makespan
on $m$ unrelated machines. In their seminal paper, Nisan and Ronen~\cite{NR99}
showed a lower bound of $2$, and an upper bound of $m$, thus leaving a large
gap. They conjectured that their upper bound is tight, but were unable to prove
it. Despite many attempts that yield positive results for several special
cases, the conjecture is far from being solved: the lower bound was only
recently slightly increased to 2.61~\cite{CKV07,KV07}, while the best upper
bound remained unchanged. In this paper
we show the optimal lower bound on truthful anonymous mechanisms: no such
mechanism can guarantee an approximation ratio better than $m$. This is the
first concrete evidence for the correctness of the Nisan-Ronen conjecture,
especially given that the classic scheduling algorithms are anonymous, and all
state-of-the-art mechanisms for special cases of the problem are anonymous as
well.
The Price of Anarchy in Bertrand Games
Shuchi Chawla,
Univ of Wisconsin Madison
Feng Niu,
Univ of Wisconsin Madison
The Internet is composed of multiple economically-independent service
providers that sell bandwidth in their networks so as to maximize their own
revenue. Users, on the other hand, route their traffic selfishly to maximize
their own utility. How does this selfishness impact the efficiency of operation
of the network? To answer this question we consider a two-stage network pricing
game where service providers first select prices to charge on their links, and
users then pick paths to route their traffic. We give tight bounds on the price
of anarchy of the game with respect to social value—the total value obtained by
all the traffic routed. Unlike recent work on network pricing, in our pricing
game users do not face congestion costs; instead service providers must ensure
that capacity constraints on their links are satisfied. Our model extends the
classic Bertrand game in economics to network settings.
Bayes-Nash Equilibria of the Generalized Second Price
Renato Gomes,
Northwestern University
Kane Sweeney,
Northwestern University
In this paper we develop a complete Bayes-Nash analysis of the Generalized
Second Price (GSP) auction, by far the most widely used mechanism to sell
sponsored search positions. Interestingly, our results are in sharp contrast
with the previous literature that adopts a complete information framework. We
start by studying the efficient Bayes-Nash equilibrium (simply
"equilibrium", from now on) of the GSP. After deriving its symmetric
bidding strategy, we show that an efficient equilibrium exists if and only if
the click-through rates of any two slots are sufficiently far apart. But
then, if an efficient equilibrium fails to exist, what should we expect from
the GSP? Are there other (possibly) inefficient equilibria? We provide a
negative answer to this question, showing that this auction possesses no
inefficient pure-strategy equilibria. We also study the revenue properties of
the GSP when an efficient equilibrium does exist. We prove the
counter-intuitive result that revenue may decrease as the click-through rates of
the second to last slots increase, provided the distribution of advertisers'
values per click is concentrated on high values. Fortunately, setting the
appropriate reserve prices may improve the performance of this mechanism.
Indeed, we prove that revenue-maximizing slot-specific reserve prices are
uniform across slots and equal their single-unit counterpart (being invariant
to click-through rates). Further, with optimal reserve prices, we show that the
search engine's revenue increases with the click-through rates of all slots
(reversing our previous result on the GSP without reserve prices).
Sybilproof Trust Exchange Protocols
Paul Resnick,
University of Michigan
Rahul Sami,
University of Michigan
We study protocols to enable an agent (the executor) to make potentially
profitable but risky interactions with another agent (the counterparty) in the
absence of a common currency or direct
trust relationship between the two parties. In such situations, it is possible
to enable the investment indirectly through a chain of credit or ``trust''
links. We introduce a novel model of indirect trust exchange protocols that
provides insight into many disparate applications, including open currency
systems, network trust aggregation systems, and manipulation-resistant
recommender systems. In each case, direct interaction between executors and
counterparties can lead to an increase in the level of the community's trust
capital over time. Allowing indirect trust exchanges opens up more investment
opportunities, but also expands the strategy space of an attacker seeking to
exploit the community for her own ends. We show that with indirect exchange
protocols, some friction is unavoidable: any trust exchange protocol that
satisfies a natural strategic safety property that we term sum-sybilproofness
can sometimes lead to a reduction in expected overall trust capital even when
all interactions are profitable in expectation. Thus, in some circumstances,
there will be limits on the percentage of transactions that can be facilitated
through indirect rather than direct trust. We present indirect trust exchange
protocols that achieve the optimal rate of expected growth in capital, among
all protocols satisfying the sum-sybilproofness condition.
Collective Revelation: A Mechanism for Self-Verified,
Weighted, and Truthful Predictions
Sharad Goel,
Yahoo! Research
Daniel Reeves,
Yahoo! Research
David Pennock,
Yahoo! Research
Decision makers often benefit from the subjective judgment of experts. For
example, estimates of disease prevalence are quite valuable, yet can be
difficult to measure objectively. Any mechanism for eliciting and aggregating
expert assessments ideally should: (1) incentivize participants to be truthful;
(2) adjust for the fact that some experts are better informed than others; and
(3) circumvent the need for objective, "ground truth" observations.
Subsets of these properties are attainable by previous elicitation methods,
including proper scoring rules, prediction markets, and the Bayesian truth serum.
Our mechanism of collective revelation, however, is the first to simultaneously
achieve all three. Furthermore, we introduce a general technique for
constructing budget-balanced mechanisms---where no net payments are made to
participants---that applies both to collective revelation and to past
peer-prediction methods.
Social Influence and the Diffusion of User-Created
Content
Eytan Bakshy,
University of Michigan
Brian Karrer,
University of Michigan
Lada Adamic,
University of Michigan
Social influence determines to a large extent what we adopt and when we
adopt it. This is just as true in the digital domain as it is in real life,
especially with the proliferation of user generated content that one must first
become aware of and then select from. In this paper, we present an empirical
study of the diffusion of user-contributed content and directly measure
adoption behavior amongst users in a time-evolving social network. We develop techniques for identifying and
modeling social influence based on the change in adoption rate following the
actions of one's friends. We apply this model to time-resolved social network
and user-to-user content transfer data from Second Life, an immersive massively
multiplayer virtual world. We find
that the social network plays a significant role in the adoption of content.
Adoption rates quicken as the number of friends adopting increases and this
effect varies with the connectivity of a particular user. We further find that sharing among friends
occurs more rapidly than sharing among strangers, but that content that
diffuses primarily through social influence tends to have a more limited
audience. Finally, we examine the role of individuals, finding that some play a
more active role in distributing content than others, but that these
influencers are distinct from the early adopters.
Efficiency of (Revenue-)Optimal Mechanisms
Gagan Aggarwal,
Google, Inc.
Gagan Goel, Georgia Tech
Aranyak Mehta, Google, Inc.
We compare the expected efficiency of revenue-maximizing (or {\em
optimal}) mechanisms with that of efficiency-maximizing ones. We show that the
efficiency of the revenue-maximizing mechanism for selling a single item with
$(k + \log_{\frac{e}{e-1}}{k} + 1)$ bidders is at least as much as the
efficiency of the efficiency-maximizing mechanism with $k$ bidders, when bidder
valuations are drawn i.i.d. from a Monotone Hazard Rate distribution.
Surprisingly, we also show that this bound is tight within a small additive
constant of $4.7$. In other words, $\Theta(\log{k})$ extra bidders suffice for
the revenue-maximizing mechanism to match the efficiency of the
efficiency-maximizing mechanism, while $o(\log{k})$ do not. This is in contrast to the result of Bulow
and Klemperer~\cite{bulow-klemperer} comparing the revenue of the two
mechanisms, where only one extra bidder suffices. More precisely, they show
that the revenue of the efficiency-maximizing mechanism with $k+1$ bidders is
no less than the revenue of the revenue-maximizing mechanism with $k$ bidders. We extend our result for the case of selling
$t$ identical items and show that $\Theta(\log{k}) + t \Theta(\log{\log{k}})$
extra bidders suffice for the revenue-maximizing mechanism to match the
efficiency of the efficiency-maximizing mechanism. In order to prove our results, we give a
classification of Monotone Hazard Rate (MHR) distributions and identify a
family of MHR distributions, such that for each class in our classification,
there is a member of this family that is pointwise lower than every distribution
in that class. This lets us prove interesting structural theorems about
distributions with Monotone Hazard Rate.
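For reference, a distribution F with density f has a monotone hazard rate when its
hazard rate

\[
h(v) \;=\; \frac{f(v)}{1 - F(v)}
\]

is nondecreasing in v; the uniform and exponential distributions are standard examples.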
Optimal Collusion-Resistant Mechanisms with
Verification
Paolo Penna,
Università di Salerno
Carmine Ventre,
University of Liverpool
We present the first general positive result on the construction of
collusion-resistant mechanisms, that is, mechanisms that guarantee dominant
strategies even when agents can form arbitrary coalitions and exchange
compensations (sometimes referred to as transferable utilities or side
payments). This is a much stronger solution concept as compared to truthful or
even group-strategyproof mechanisms, and only impossibility results were known
for this type of mechanism in the "classical" model. We describe collusion-resistant mechanisms
with verification that return optimal solutions for a wide class of mechanism
design problems (which includes utilitarian ones as a special case). Note that
every collusion-resistant mechanism without verification must have an unbounded
approximation factor and, in general, optimal solutions cannot be obtained even
if we content ourselves with truthful ("non-collusion-resistant")
mechanisms. All these results apply to problems that have been extensively
studied in the algorithmic mechanism design literature like, for instance, task
scheduling and inter-domain routing.
On Representing Coalitional Games with Externalities
Tomasz Michalak,
University of Liverpool
Talal Rahwan,
University of Southampton
Jacek Sroka,
University of Warsaw
Andrew Dowell,
University of Liverpool
Michael Wooldridge,
University of Liverpool
Peter McBurney,
University of Liverpool
Nicholas Jennings,
University of Southampton
We consider the issue of representing coalitional games in multiagent
systems with externalities, i.e., in systems where the performance of one
coalition may be affected by the functioning of other co-existing coalitions.
On top of the conventional partition function game representation (PFG), we
propose a number of new representations based on a new notion of externalities.
In contrast to conventional game theory, our new concept is not related to the
process by which the coalitions are formed, but rather to the effect that each
coalition may have on the entire system and vice versa. We show that the new
representations are fully expressive and, for many classes of games, more
concise than the conventional PFG. Building upon the new representations, we
propose a number of approaches to solve the coalition structure generation
problem in systems with externalities. We show that, if externalities are
characterised by various degrees of regularity, the new representations allow
us to adapt some of the algorithms that were originally designed for domains
with no externalities so that they can be used when externalities are present.
Finally, building upon [16] and [9], we present a unified method to solve the
coalition structure generation problem in any system, with or without
externalities.