The main difficulty of algorithm design has traditionally been identified with understanding the combinatorial structure underlying the optimisation problem at hand. However, this is only one source of complexity for algorithm design. Indeed, the emergence of the Internet as the computing platform has enriched the picture by highlighting the presence of agents that selfishly evaluate the outcome of the computation. As a consequence, algorithms nowadays must also work in the presence of these selfish interests, which are often in conflict with the ultimate objective of the computation (e.g., optimality). The main scope of the field of Algorithmic Mechanism Design (AMD) is to realign the objectives of the designer with those of the selfish agents through the design of so-called truthful mechanisms. Truthfulness guarantees that agents have no interest in misguiding the mechanism towards the computation of "wrong" (e.g., suboptimal) outcomes.
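In a standard formalization (the notation here is ours), a mechanism without money is a social choice function f mapping the agents' reported types to an outcome, and truthfulness (also called strategyproofness) requires that no agent can gain by misreporting, whatever the others report:

\[
u_i\bigl(f(t_i, b_{-i}),\, t_i\bigr) \;\geq\; u_i\bigl(f(b_i, b_{-i}),\, t_i\bigr)
\qquad \text{for all } i,\ t_i,\ b_i,\ b_{-i},
\]

where t_i is agent i's true type, b_i an arbitrary misreport, b_{-i} the reports of the other agents, and u_i(\cdot, t_i) agent i's utility when her true type is t_i.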
Truthfulness, however, comes with a heavy price tag. Specifically, monetary transfers between the designer and the agents are needed, for otherwise, by the celebrated Gibbard-Satterthwaite theorem, the only truthful mechanisms are dictatorships. On one hand, dictatorships are neither interesting nor useful, as the outcome might be arbitrarily "bad" for the designer's objectives (e.g., have a poor approximation guarantee for the measure of interest). On the other hand, while money might be reasonable in some applications, little justification can be found, in general, for the presence of a digital currency or, indeed, for the use of money at all. There are contexts in which money is morally unacceptable (e.g., to support certain political decisions) or even illegal (as, e.g., in organ donations).
This proposal aims at reconsidering the role of money as a 'necessary evil' in AMD. We propose to reconcile computation and incentives without the use of monetary transfers by leveraging restricted domains, approximation, and well-motivated, relaxed notions of truthfulness. In detail, the Gibbard-Satterthwaite theorem is proved under the hypothesis that agents have unrestricted domains (i.e., unrestricted ways to misguide the algorithm), yet many computer science applications involve quite well-structured domains. What kind of approximation guarantee of the optimum can we achieve if we insist on truthfulness without money when agents have restricted domains? Our third lever is a novel application of the concept of verification in AMD. Verification, in this context, means that some kinds of misbehaviour of the agents can be uncovered (and punished accordingly). This idea mimics what happens in real life, where "inspections of the job done" are often the norm, and it allows for a relaxed notion of truthfulness whereby agents' misbehaving strategies are restricted to those that cannot be verified. The concept of verification has been extensively used in AMD with money; we ask here to what extent we can trade money for verification. How good is the approximation guarantee of the solutions output by truthful mechanisms without money that use verification?
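One common way to make this relaxation precise (the sets M_i below are our notation) is to let each true type t_i determine the set M_i(t_i) of misreports that the verification step cannot expose; truthfulness is then required only against undetectable lies,

\[
u_i\bigl(f(t_i, b_{-i}),\, t_i\bigr) \;\geq\; u_i\bigl(f(b_i, b_{-i}),\, t_i\bigr)
\qquad \text{for all } i,\ b_{-i}, \text{ and } b_i \in M_i(t_i),
\]

while verifiable lies b_i \notin M_i(t_i) are deterred by the associated punishment. The larger the sets M_i, the weaker the verification and the closer the requirement is to classical truthfulness.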
We want to advance our knowledge of this (largely) unexplored middle ground between truthfulness, approximation, and money, both in relation to fundamental optimisation problems with many real-life applications, such as Combinatorial Auctions and Facility Location, and in a more abstract way, by relating particular forms of verification to truthfulness, money, and other measures of interest. Our study attempts to build the foundations of (i) a more applied use of AMD; and (ii) the application of game-theoretic models to the non-profit sector.
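As a concrete illustration of what restricted domains can buy, consider the textbook single-facility location problem on the real line with single-peaked preferences: placing the facility at the median of the reported peaks is truthful without money (a classical observation due to Moulin) and also minimises the total distance to the agents. A minimal sketch, with the function name of our choosing:

    def median_mechanism(reports):
        # reports: the agents' reported peak locations on the line.
        # With single-peaked preferences, reporting the true peak is a
        # dominant strategy, and the median minimises total distance:
        # truthfulness without money, at no loss of optimality.
        xs = sorted(reports)
        return xs[(len(xs) - 1) // 2]  # a (left) median; any fixed median works

    median_mechanism([0.1, 0.9, 0.4])  # -> 0.4

No agent can move the facility closer to her peak by misreporting: exaggerating past the median leaves it unchanged, and any other lie can only push it further away.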