diff --git "a/KdFRT4oBgHgl3EQf0zhG/content/tmp_files/2301.13654v1.pdf.txt" "b/KdFRT4oBgHgl3EQf0zhG/content/tmp_files/2301.13654v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/KdFRT4oBgHgl3EQf0zhG/content/tmp_files/2301.13654v1.pdf.txt" @@ -0,0 +1,2939 @@ +arXiv:2301.13654v1 [cs.GT] 31 Jan 2023 +MULTI-AGENT CONTRACT DESIGN: HOW TO COMMISSION +MULTIPLE AGENTS WITH INDIVIDUAL OUTCOMES +ARXIV PREPRINT +Matteo Castiglioni +Politecnico di Milano +matteo.castiglioni@polimi.it +Alberto Marchesi +Politecnico di Milano +alberto.marchesi@polimi.it +Nicola Gatti +Politecnico di Milano +nicola.gatti@polimi.it +February 1, 2023 +ABSTRACT +We study hidden-action principal-agent problems with multiple agents. These are problems in which +a principal commits to an outcome-dependent payment scheme (called contract) in order to incen- +tivize some agents to take costly, unobservable actions that lead to favorable outcomes. Previous +works on multi-agent problems study models where the principal observes a single outcome deter- +mined by the actions of all the agents. Such models considerably limit the contracting power of the +principal, since payments can only depend on the joint result of all the agents’ actions, and there is +no way of paying each agent for their individual result. In this paper, we consider a model in which +each agent determines their own individual outcome as an effect of their action only, the principal +observes all the individual outcomes separately, and they perceive a reward that jointly depends on +all these outcomes. This considerably enhances the principal’s contracting capabilities, by allowing +them to pay each agent on the basis of their individual result. We analyze the computational com- +plexity of finding principal-optimal contracts, revolving around two newly-introduced properties of +principal’s rewards, which we call IR-supermodularity and DR-submodularity. Intuitively, the for- +mer captures settings with increasing returns, where the rewards grow faster as the agents’ effort +increases, while the latter models the case of diminishing returns, in which rewards grow slower +instead. These two properties naturally model two common real-world phenomena, namely disec- +onomies and economies of scale. In this paper, we first address basic instances in which the principal +knows everything about the agents, and, then, more general Bayesian instances where each agent has +their own private type determining their features, such as action costs and how actions stochastically +determine individual outcomes. As a preliminary result, we show that finding an optimal contract +in a non-Bayesian instance can be reduced in polynomial time to a suitably-defined maximization +problem over a matroid having a particular structure. Such a reduction is needed to prove our main +positive results in the rest of the paper. We start by analyzing non-Bayesian instances with IR- +supermodular rewards, where we prove that the problem of computing a principal-optimal contract +is inapproximable in general, but it becomes polynomial-time solvable under some mild regularity +assumptions. Then, we study non-Bayesian instances with DR-submodular rewards, showing that +the problem is inapproximable also in this setting, but it admits a polynomial-time approximation +algorithm which outputs contracts providing a multiplicative approximation (1 − 1/e) of the prin- +cipal’s reward in an optimal contract, up to a small additive loss. 
In conclusion, we extend our positive results to Bayesian instances. First, we provide a characterization of the principal's optimization problem, by showing that it can be approximately solved by means of a linear formulation. This is non-trivial, since in general the problem may not admit a maximum, but only a supremum. Then, based on such a linear formulation, we provide a polynomial-time approximation algorithm that employs an ad hoc implementation of the ellipsoid method using an approximate separation oracle. We prove that such an oracle can be implemented in polynomial time by exploiting our positive results on non-Bayesian instances. Surprisingly, this allows us to (almost) match the guarantees obtained for non-Bayesian instances.

1 Introduction

Over the last few years, principal-agent problems have received growing attention from the economics and computation community. These problems model scenarios in which a principal interacts with one or more agents, with the latter playing actions that induce externalities on the former. We focus on hidden-action problems, where the principal only observes some stochastically-determined outcome of the actions selected by the agents, but not the actions themselves. The principal gets a reward associated with the realized outcome, while an agent incurs a cost when performing an action. Thus, the principal's goal is to incentivize agents to undertake actions which result in profitable outcomes. This is accomplished by committing to a contract, which is a payment scheme defining how much the principal pays each agent depending on the realized outcome.

The classical textbook example motivating the study of hidden-action principal-agent problems is that of a firm (principal) hiring a salesperson (agent) in order to sell some products. The salesperson has to decide on the level of effort (action) to put in selling products, while the firm only observes the number of products that are actually sold (outcome). In such a scenario, it is natural that the firm commits to pay a commission to the salesperson by stipulating a contract with them, and that such a commission only depends on the number of products being sold.

Nowadays, the study of principal-agent problems is also motivated by the fact that they are ubiquitous in several real-world settings, such as, e.g., crowdsourcing platforms (Ho et al., 2016), blockchain-based smart contracts (Cong and He, 2019), and healthcare (Bastani et al., 2016).

The computational aspects of principal-agent problems with a single agent have been widely investigated in the literature. Instead, only a few works study problems with multiple agents. Some notable examples are the papers by Babaioff et al. (2006) and Emek and Feldman (2012), and the very recent preprint by Duetting et al. (2022). These works address models where the principal observes a single outcome determined by the actions of all the agents. Such models considerably limit the contracting power of the principal, since payments can only depend on the joint result of all the agents' actions, and there is no way of paying each agent for their individual result.
In this paper, we introduce and study principal-agent problems with multiple agents—compactly referred to as principal-multi-agent problems—in which each agent determines their own individual outcome as an effect of their action only, the principal observes all the individual outcomes separately, and they perceive a reward that jointly depends on all these outcomes. Our model fits many real-world applications. For instance, in settings where a firm wants to hire multiple salespersons, it is natural that the firm can observe the number of products being sold by each of them individually. Additionally, as we show in this paper, our model also allows us to circumvent the equilibrium-selection issues raised by the problems studied in (Babaioff et al., 2006; Emek and Feldman, 2012; Duetting et al., 2022). Indeed, as we discuss later in Section 1.2, such issues originate from the appearance of externalities among the agents, which are instead not present in our setting.

1.1 Original Contributions

We investigate the computational complexity of finding optimal contracts in our principal-multi-agent problems with agents' individual outcomes. Our analysis revolves around two properties of principal's rewards, which we call IR-supermodularity and DR-submodularity. Intuitively, the former captures settings with increasing returns, where the rewards grow faster as the agents' effort increases, while the latter models the case of diminishing returns, in which rewards grow slower as the effort increases. These two properties naturally model two common real-world phenomena, namely economies and diseconomies of scale, respectively.

In the first sections of the paper (namely Sections 2, 3, 4, and 5), we study basic principal-multi-agent problems in which the principal knows everything about the agents, i.e., their action costs and the probability distributions that their actions induce over (individual) outcomes. Then, in Section 6, we switch the attention to the far more general Bayesian settings in which each agent's action costs and probability distributions depend on a private agent's type, which is unknown to the principal, but randomly drawn according to a commonly-known probability distribution.

After introducing, in Section 2, all the preliminary concepts related to the non-Bayesian version of our principal-multi-agent problems, in Section 3 we provide a useful preliminary result. We show that the problem of computing an optimal contract in a non-Bayesian instance can be reduced in polynomial time to the maximization of a suitably-defined set function over a matroid having a particular structure. Specifically, we call the matroids introduced by our reduction 1-partition matroids, since their ground sets are partitioned into classes and their independent sets are all the subsets which contain at most one element from each class. At the end of the section (more precisely in Section 3.3), we also provide an additional preliminary result, by showing that there exists a polynomial-time algorithm for maximizing particular set functions, which we call ordered-supermodular functions, over 1-partition matroids. This will be useful to derive our positive result in the following Section 4, and it may also be of independent interest.

In Section 4, we provide our main results on non-Bayesian instances with IR-supermodular principal's rewards.
We start with a negative result: for any $\rho > 0$, it is NP-hard to design a contract providing a multiplicative approximation $\rho$ of the principal's expected utility in an optimal contract, even when both the number of agents' actions and the number of outcomes are fixed. Then, we show how to circumvent such a negative result by introducing a mild regularity assumption. Specifically, we prove that, in instances with IR-supermodular principal's rewards that additionally satisfy a particular first-order stochastic dominance (FOSD) condition, an (exact) optimal contract can be found in polynomial time. This is accomplished by exploiting the reduction introduced in Section 3, and by proving that, for such instances, the resulting set function is ordered-supermodular.

In Section 5, we switch our attention to non-Bayesian instances with DR-submodular principal's rewards. Similarly to the preceding section, we start with a negative result: for any $\alpha > 0$, it is NP-hard to design a contract providing a multiplicative approximation $n^{1-\alpha}$—with $n$ being the number of agents—of the principal's expected utility in an optimal contract, even when both the number of agents' actions and the dimensionality of the outcomes are fixed. Next, we complement such a negative result by providing a polynomial-time approximation algorithm for the problem. In particular, we exploit the reduction to matroid optimization introduced in Section 3 and a result by Sviridenko et al. (2017) in order to design an algorithm that, with high probability, outputs a contract providing a multiplicative approximation $(1 - 1/e)$ of the principal's reward in an optimal contract, up to a small additive loss $\epsilon > 0$, in time polynomial in the instance size and $1/\epsilon$.

Finally, we conclude the paper, in Section 6, by providing our results on Bayesian principal-multi-agent problems. First, we extend the model recently introduced by Castiglioni et al. (2022b) to our multi-agent setting. The key feature of such a model is that, by taking inspiration from classical mechanism design, it adds a type-reporting stage in which each agent is asked to report their type to the principal. In such a setting, the principal is better off committing to a menu of randomized contracts rather than a single contract. This specifies a collection of probability distributions over (non-randomized) contracts, where each distribution is employed to draw a contract upon a different combination of types reported by the agents. Surprisingly, we show that it is possible to implement a polynomial-time approximation algorithm for the problem of computing an optimal menu of randomized contracts, whose guarantees (almost) match those obtained for non-Bayesian instances. In order to obtain the result, we first provide a characterization of the principal's optimization problem, by showing that it can be approximately solved by means of a linear program (LP) with exponentially-many variables and polynomially-many constraints. Notice that this step is non-trivial, since in general the principal's optimization problem may not admit a maximum, but only a supremum. Our algorithm is based on an ad hoc implementation of the ellipsoid method, which approximately solves such an LP, provided that it has access to a suitably-defined, polynomial-time approximate separation oracle. Such an oracle can be implemented by using the algorithms developed in Sections 4 and 5 for non-Bayesian instances.
1.2 Related Works

Next, we survey the most-related computational works on hidden-action principal-agent problems.

Works on Principal-Agent Problems with a Single Agent. Most of these works focus on non-Bayesian settings. Among the most related to ours, Dütting et al. (2021) and Dütting et al. (2022) study models whose underlying structure is combinatorial. In particular, the latter analyze the case in which the outcome space is defined implicitly through a succinct representation, while the former address settings in which the agent selects a subset of actions (rather than a single one). Moreover, Babaioff and Winter (2014) study the complexity of contracts in terms of the number of different payments that they specify, while Dütting et al. (2019) use the computational lens to analyze the efficiency (in terms of principal's expected utility) of linear contracts with respect to optimal ones. Recently, some works have also considered the more realistic Bayesian settings (Guruganesh et al., 2021; Alon et al., 2021; Castiglioni et al., 2022a,c). In particular, Castiglioni et al. (2022c) introduce the idea of menus of randomized contracts, showing that in Bayesian settings they enjoy much nicer computational properties than menus of deterministic (i.e., non-randomized) contracts, which were previously studied in (Guruganesh et al., 2021; Alon et al., 2021).

Works on Principal-Agent Problems with Multiple Agents. All the previous works on multi-agent settings are limited to non-Bayesian instances. Babaioff et al. (2006) are the first to study a model with multiple agents (see also its extended version (Babaioff et al., 2012) and its follow-ups (Babaioff et al., 2009, 2010)). They study a setting in which agents have binary actions, called effort and no effort, and the outcome is determined according to a probability distribution that depends on the set of agents who decide to undertake effort. This model induces externalities among the agents, since the realized outcome (and, in turn, the agents' payments) depends on the actions taken by all the agents. Babaioff et al. (2006) show that finding an optimal contract is #P-complete even when the outcome-determining function is represented as a "simple" read-once network. Emek and Feldman (2012) extend the work by Babaioff et al. (2006) by showing that the problem is NP-hard even for a special class of submodular functions, while admitting an FPTAS. Finally, a very recent preprint by Duetting et al. (2022) considerably extends previous works by providing constant-factor approximation algorithms for problems with submodular and XOS rewards.

2 The Principal-Multi-Agent Problem

An instance of the principal-multi-agent problem is characterized by a tuple $(N, \Omega, A)$,[1] where: $N$ is a finite set of $n := |N|$ agents; $\Omega$ is a finite set of $m := |\Omega|$ possible (individual) outcomes of an agent's action; and $A$ is a finite set of $\ell := |A|$ actions available to each agent.[2]

For each agent $i \in N$, we introduce $F_{i,a} \in \Delta_\Omega$ to denote the probability distribution over outcomes $\Omega$ induced by action $a \in A$ of agent $i$,[3] while $c_{i,a} \in [0,1]$ denotes the agent's cost for playing such an action.[4] For ease of presentation, we let $F_{i,a,\omega}$ be the probability that $F_{i,a}$ assigns to $\omega \in \Omega$, so that it holds $\sum_{\omega \in \Omega} F_{i,a,\omega} = 1$. We define $a \in A^n := \times_{i \in N} A$ as a tuple of agents' actions, whose $i$-th component is denoted by $a_i$ and represents the action played by agent $i$.
Moreover, we let $\omega \in \Omega^n := \times_{i \in N} \Omega$ be a tuple of outcomes, whose $i$-th component $\omega_i$ is the individual outcome achieved by agent $i$. Each tuple $\omega \in \Omega^n$ has an associated reward to the principal, which we denote by $r_\omega \in [0,1]$. As a result, whenever the agents play the actions defined by a tuple $a \in A^n$, the principal achieves an expected reward equal to
$R_a := \sum_{\omega \in \Omega^n} r_\omega \prod_{i \in N} F_{i,a_i,\omega_i}.$

Notice that, in our model, the principal observes all the elements in the tuple of outcomes $\omega \in \Omega^n$ reached by the agents, which consists in an individual outcome $\omega_i$ for each agent $i \in N$. This is in contrast with previous works on principal-multi-agent problems (see, e.g., (Babaioff et al., 2006; Emek and Feldman, 2012; Duetting et al., 2022)), which assume that the principal can only observe a single outcome that is jointly determined by the tuple of all the agents' actions.

2.1 Contracts and Principal's Optimization Problem

In a principal-multi-agent problem, the goal of the principal is to maximize their expected utility by committing to a contract, which specifies payments from the principal to each agent contingently on the actual individual outcome achieved by the agent. Formally, a contract is defined by a matrix $p \in \mathbb{R}^{n \times m}_+$, whose entries $p_{i,\omega} \geq 0$ define a payment for each agent $i \in N$ and outcome $\omega \in \Omega$.[5] Notice that the assumption that payments are non-negative (i.e., they can only be from the principal to agents) is common in contract theory, where it is known as limited liability (Carroll, 2015). When agent $i \in N$ selects an action $a \in A$ under a contract $p \in \mathbb{R}^{n \times m}_+$, the expected payment from the principal to agent $i$ is $P_{i,a} := \sum_{\omega \in \Omega} F_{i,a,\omega}\, p_{i,\omega}$, while the agent's expected utility is $P_{i,a} - c_{i,a}$.

Given a contract $p \in \mathbb{R}^{n \times m}_+$, each agent $i \in N$ selects an action such that:
1. it is incentive compatible (IC), i.e., it maximizes their expected utility among actions in $A$;
2. it is individually rational (IR), i.e., it has non-negative expected utility (if there is no IR action, then agent $i$ abstains from playing so as to maintain the status quo).

For ease of presentation, we make the following w.l.o.g. assumption:

Assumption 1 (Null action). There exists an action $a_\emptyset \in A$ such that $c_{i,a_\emptyset} = 0$ for all $i \in N$.

Such an assumption implies that each agent has an action providing them with a non-negative utility, thus ensuring that any IC action is also IR and allowing us to focus w.l.o.g. on incentive compatibility only. In the following, given a contract $p \in \mathbb{R}^{n \times m}_+$, we denote by $A^*_i(p) \subseteq A$ the set of actions that are IC for agent $i \in N$ under that contract. Formally, it holds $A^*_i(p) := \arg\max_{a \in A} \{P_{i,a} - c_{i,a}\}$.

[1] For ease of notation, in this paper we assume that all the numerical quantities that define a principal-multi-agent problem instance, such as costs, rewards, and probabilities, are attached to their corresponding elements in the sets $N$, $\Omega$, and $A$, so that we can simply write $I := (N, \Omega, A)$ to identify an instance of the problem.
[2] For ease of presentation, we assume that all the agents share the same action set and outcome set. Our results continue to hold even if each agent $i \in N$ has their own action set $A_i$ and their actions induce outcomes in an agent-specific set $\Omega_i$.
[3] In this paper, given a finite set $X$, we denote by $\Delta_X$ the set of all the probability distributions defined over elements of $X$.
[4] For ease of presentation, costs and rewards are in $[0,1]$. All the results can be easily generalized to an arbitrary range.
[5] W.l.o.g., we can restrict the attention to contracts that define payments independently for each agent, rather than dealing with contracts which specify payments based on the tuple of outcomes resulting from the actions of all agents. This is because each agent $i \in N$ induces a specific outcome $\omega_i$ with their action (independently of what the others do), and such outcome is observed by the principal. As a consequence, we also have that, in our setting, there are no externalities among the agents, since an agent's expected utility does not depend on the actions played by other agents.
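To make the definitions above concrete, the following minimal Python sketch computes expected payments $P_{i,a}$ and the IC action sets $A^*_i(p)$ for a given contract. All numbers are hypothetical toy values, not taken from the paper; note how ties between IC actions can arise, which is exactly the issue addressed by Remark 1 below.

```python
import numpy as np

# Toy instance: n = 2 agents, ell = 2 actions, m = 2 outcomes (all hypothetical).
# F[i, a, w] is the probability that action a of agent i induces outcome w;
# cost[i, a] is agent i's cost for action a (action 0 is the null action).
F = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.8, 0.2], [0.3, 0.7]]])
cost = np.array([[0.0, 0.3],
                 [0.0, 0.4]])

def expected_payment(p, i, a):
    """P_{i,a} = sum_w F_{i,a,w} * p_{i,w} under contract p (an n x m matrix)."""
    return F[i, a] @ p[i]

def ic_actions(p, i, tol=1e-9):
    """A*_i(p): the actions maximizing agent i's expected utility P_{i,a} - c_{i,a}."""
    utils = np.array([expected_payment(p, i, a) - cost[i, a]
                      for a in range(F.shape[1])])
    return set(np.flatnonzero(utils >= utils.max() - tol))

p = np.array([[0.0, 0.5],   # pay agent 0 only upon outcome 1
              [0.0, 0.8]])  # pay agent 1 only upon outcome 1
print(ic_actions(p, 0), ic_actions(p, 1))  # {1} and {0, 1}: agent 1 is indifferent
```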
Furthermore, given an action $a \in A$ of agent $i \in N$, we let $\mathcal{P}_{i,a} \subseteq \mathbb{R}^{n \times m}_+$ be the set of contracts such that action $a$ is IC for agent $i$ under them; formally,
$\mathcal{P}_{i,a} := \{ p \in \mathbb{R}^{n \times m}_+ \mid a \in A^*_i(p) \}.$

Given a contract $p \in \mathbb{R}^{n \times m}_+$, the resulting set $A^*_i(p)$ of IC actions for an agent $i \in N$ may contain more than one action. Thus, it is necessary to adopt a suitable tie-breaking-rule assumption.

Remark 1 (On classical tie-breaking rules). Most of the works on principal-agent problems usually assume that, whenever an agent is indifferent among multiple IC actions, they break ties in favor of the principal (see, e.g., (Dütting et al., 2021)). Such an assumption is unreasonable in our setting. Indeed, as we show in Corollary 2, the problem of computing a utility-maximizing tuple of agents' actions that are IC under a given contract $p \in \mathbb{R}^{n \times m}_+$ is NP-hard.

We circumvent the issue of classical tie-breaking rules highlighted in Remark 1 by slightly abusing terminology and extending the notion of contract to also include action recommendations for the agents. Formally, we identify a contract with a pair $(p, a^*)$, where $p \in \mathbb{R}^{n \times m}_+$ defines the payments and $a^* \in \times_{i \in N} A^*_i(p)$ specifies a tuple of agents' actions, which should be interpreted as action recommendations suggested by the principal to the agents. Given that the actions in $a^*$ are IC under $p$, we assume w.l.o.g. that the agents stick to such recommendations.

In conclusion, the principal's optimization problem reads as follows:

Definition 1 (Principal's Optimization Problem). Given an instance $(N, \Omega, A)$ of the principal-multi-agent problem, compute an optimal contract $(p, a^*)$—with $p \in \mathbb{R}^{n \times m}_+$ and $a^* \in \times_{i \in N} A^*_i(p)$—which is defined as a pair $(p, a^*)$ maximizing the principal's expected utility:
$R_{a^*} - \sum_{i \in N} P_{i,a^*_i} = \sum_{\omega \in \Omega^n} r_\omega \prod_{i \in N} F_{i,a^*_i,\omega_i} - \sum_{i \in N} \sum_{\omega \in \Omega} F_{i,a^*_i,\omega}\, p_{i,\omega}.$

2.2 On the Representation of Principal's Rewards

Representing principal's rewards explicitly in a principal-multi-agent problem becomes unfeasible when there are many agents, since the number of possible tuples of outcomes grows as $m^n$. Thus, we work with a succinct representation of principal's rewards, which we formally introduce in the following. We remark that, with arbitrary (explicitly-represented) rewards, an optimal contract can be found in time polynomial in the instance size (i.e., in time depending polynomially on $m^n$), as we show in Section 3.2.

We say that a principal-multi-agent problem instance $(N, \Omega, A)$ has succinct rewards if:
1. outcomes can be represented as non-negative $q$-dimensional vectors, with $q \in \mathbb{N}_{>0}$ representing the dimensionality of the outcome space; namely, $\Omega$ is a finite subset of $\mathbb{R}^q_+$;
the principal’s rewards can be expressed by means of a reward function g : Rnq ++ → R such that rω = g(ω) +holds for every tuple of outcomes ω ∈ Ωn, and, thus, we can also write Ra = � +ω∈Ωn g(ω) � +i∈N Fi,ai,ωi +for every a ∈ An.6 +Let us remark that, for ease of presentation and overloading notation, we denote tuples of outcomes as vectors, namely +ω ∈ Ωn ⊆ Rnq ++ , where we let ωi,j be the j-th component of the vector that identifies the outcome achieved by agent +i, for all i ∈ N and j ∈ [q].7 Moreover, in the following, we assume that g : Rnq ++ → R can be accessed through an +oracle that, given ω ∈ Ωn, outputs g(ω).8 +In this work, we make the following common assumption on principal’s rewards: +Assumption 2 (Increasing rewards). The principal’s reward function g : Rnq ++ → R is increasing; formally, it holds +that g(ω) ≥ g(ω′) for all ω, ω′ ∈ Rnq ++ : ω ≥ ω′. +Moreover, we will focus on two particular classes of reward functions, which, as we show next, enjoy some useful +properties and are met in many real-world settings. +6Notice that, since we assume that rω ∈ [0, 1] for all ω ∈ Ωn, while the function g is allowed to take any real value over its +domain Rnq ++ , it has to hold that g(ω) ∈ [0, 1] for all ω ∈ Ωn. +7In this paper, given a positive natural number x ∈ N>0, we let [x] := {1, . . . , x} be the set of the first x natural numbers. +8In this paper, for ease of exposition, we assume that the value of Ra for any given a ∈ An can be computed in polynomial +time, without enumerating tuples of outcomes. The value of Ra can be approximated up to any arbitrarily small error with high +probability by sampling each ωi independently from Fi,a, and evaluating g(ω). All the results in the paper can be easily extended +to also account for this additional (arbitrarily small) approximation. +5 + +ARXIV PREPRINT - FEBRUARY 1, 2023 +Definition 2 (DR-submodularity and IR-supermodularity). A reward function g : Rnq ++ → R is diminishing-return +submodular (DR-submodular) if, for all ω, ω′, ω′′ ∈ Rnq ++ : ω ≤ ω′, it holds +g(ω + ω′′) − g(ω) ≥ g(ω′ + ω′′) − g(ω′). +Moreover, a reward function g : Rnq ++ → R is increasing-return supermodular (IR-supermodular)if its opposite function +−g is DR-submodular. +Let us remark that, when the reward function g is continuously differentiable, then the property that characterizes +DR-submodular functions has a more intuitive interpretation. Indeed, as shown by Bian et al. (2017), when g is +continuously differentiable, g is DR-submodular if and only if: +∇g(ω) ≥ ∇g(ω′) +∀ω, ω′ ∈ Rnq ++ : ω ≥ ω′. +Intuitively, this means that, if a tuple of outcomes ω′ dominates component-wise another tuple ω, then in ω′ the reward +function grows slower than in ω along all of its components. This property is satisfied in many real-world scenarios, +as we show in the following specific example. +Example 1 (Selling multiple products). Consider a principal-agent problem modeling the interaction between a firm +and a salesperson (the example can be easily generalized to the case of multiple salespersons). The firm wants to sell +q ∈ N>0 different products, and the salesperson can sell a variable quantity of each product, depending on the level of +effort put in selling each of them. Thus, the outcome achieved by the salesperson can be encoded by a vector ω ∈ R1q ++ +whose j-th component ω1,j represents the quantity of product j being sold. 
This property is satisfied in many real-world scenarios, as we show in the following specific example.

Example 1 (Selling multiple products). Consider a principal-agent problem modeling the interaction between a firm and a salesperson (the example can be easily generalized to the case of multiple salespersons). The firm wants to sell $q \in \mathbb{N}_{>0}$ different products, and the salesperson can sell a variable quantity of each product, depending on the level of effort put in selling each of them. Thus, the outcome achieved by the salesperson can be encoded by a vector $\omega \in \mathbb{R}^{1 \cdot q}_+$ whose $j$-th component $\omega_{1,j}$ represents the quantity of product $j$ being sold. In such a setting, a DR-submodular reward function $g$ models scenarios in which the firm is subject to diseconomies of scale, and, thus, the marginal return of each unit of product sold decreases as the quantity sold increases. This may be due to the fact that, e.g., the firm has to sustain much higher operational costs in order to increase its selling capacity. On the other hand, an IR-supermodular reward function $g$ models cases in which there are economies of scale, and, thus, the marginal return of each unit of product sold increases with quantity (since, e.g., the fixed costs are more efficiently covered).

3 Reducing Principal-Multi-Agent Problems to Matroids

In this section, we show that computing an optimal contract in principal-multi-agent problems can be reduced in polynomial time to a maximization problem defined over a special class of matroids. First, in Section 3.1, we introduce some preliminary definitions on matroids and optimization problems over matroids. Then, in Section 3.2, we provide the reduction.

We conclude the section with Section 3.3, in which we provide a preliminary technical result for the problem of maximizing functions defined over 1-partition matroids and satisfying a particular (stronger) notion of supermodularity. This result will be useful in the following Section 4.

3.1 Preliminaries on Matroids

A matroid $M := (G, I)$ is defined by a finite ground set $G$ and a collection $I$ of independent sets, which are subsets of $G$ satisfying the following characteristic properties:
1. the empty set is independent, i.e., $\emptyset \in I$;
2. every subset of an independent set is independent, i.e., for $S' \subseteq S \subseteq G$, if $S \in I$ then $S' \in I$;
3. if $S \in I$ and $S' \in I$ are two independent sets such that $S$ has more elements than $S'$, i.e., $|S| > |S'|$, then there exists an element $x \in S \setminus S'$ such that $S' \cup \{x\} \in I$.

Any subset $S \subseteq G$ such that $S \notin I$ is said to be dependent. The bases of the matroid $M$ are all the maximal independent sets of $M$, where an independent set is said to be maximal if it becomes dependent by adding any element of $G$ to it. We denote by $B(M) \subset 2^G$ the set of the bases of $M$. We refer the reader to (Schrijver et al., 2003) for a detailed treatment of matroids.

In the following, we will also consider optimization problems defined over matroids. In particular, given a set function $f : 2^G \to \mathbb{R}$ assigning a value to each subset of the ground set, the associated maximization problem over a matroid $M := (G, I)$ is defined as $\max_{S \in I} f(S)$.

3.2 Reduction to Matroid Optimization

In order to provide our reduction, we need to introduce the following class of matroids:

Definition 3 (1-Partition Matroid). A matroid $M := (G, I)$ is a 1-partition matroid if there exist $d \in \mathbb{N}_{>0}$ subsets $G_i \subseteq G$ of ground elements such that:
1. $G = \bigcup_{i \in [d]} G_i$ and $G_i \cap G_j = \emptyset$ for all $i, j \in [d] : i \neq j$;
2. $I = \{ S \subseteq G : |S \cap G_i| \leq 1 \ \forall i \in [d] \}$.

Intuitively, in a 1-partition matroid, the ground set $G$ is partitioned into $d$ disjoint subsets $G_i$, and the independent sets are all and only the subsets of $G$ that contain at most one element of each subset $G_i$. In the following, we denote by $M := (\{G_i\}_{i \in [d]}, I)$ a 1-partition matroid with $G := \bigcup_{i \in [d]} G_i$, and, for ease of notation, we let $k_i := |G_i|$ for every $i \in [d]$. Notice that, as it is immediate to check, the set $B(M)$ of the bases of a 1-partition matroid $M := (\{G_i\}_{i \in [d]}, I)$ is made by all the subsets of $G$ containing exactly one element for each subset $G_i$.
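The following Python sketch (toy sizes, hypothetical labels) spells out the 1-partition matroid arising from the mapping of Definition 4 below: ground elements are agent-action pairs, independence means picking at most one action per agent, and the bases enumerate the $\ell^n$ action profiles.

```python
from itertools import product

# Toy ground set: (agent, action) pairs; class G_i collects the pairs of agent i.
agents, actions = [0, 1], [0, 1, 2]
classes = {i: {(i, a) for a in actions} for i in agents}

def independent(S):
    """S is independent iff |S ∩ G_i| <= 1 for every class G_i."""
    return all(len(S & G_i) <= 1 for G_i in classes.values())

# Bases pick exactly one element per class, so they correspond one-to-one to
# action profiles in A^n.
bases = [frozenset(zip(agents, prof))
         for prof in product(actions, repeat=len(agents))]
assert all(independent(B) for B in bases)
print(len(bases), "bases = ell^n =", len(actions) ** len(agents))
```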
Next, we show how the problem of computing an optimal contract in principal-multi-agent problems can be reduced in polynomial time to a maximization problem defined over a suitably-constructed 1-partition matroid, which is formally defined as follows:

Definition 4 (Mapping from principal-multi-agent problems to 1-partition matroids). Given an instance of the principal-multi-agent problem, say $I := (N, \Omega, A)$, we define its corresponding 1-partition matroid $M^I := (\{G^I_i\}_{i \in N}, I^I)$ as follows:
1. $G^I_i := \{(i, a) : a \in A\}$ for all $i \in N$, with $G^I := \bigcup_{i \in N} G^I_i$;
2. $I^I := \{ S \subseteq G^I : |S \cap G^I_i| \leq 1 \ \forall i \in N \}$.

It is immediate to check that $M^I$ is indeed a 1-partition matroid. Moreover, its bases correspond one-to-one to agents' action profiles $a \in A^n$. In particular, an independent set $S \in I^I$ of $M^I$ assigns an action to each agent in $N_S := \{ i \in N : |S \cap G^I_i| = 1 \}$; we denote by $a_{S,i} \in A$ the action associated to agent $i \in N_S$. Since a base of a 1-partition matroid is any independent set $S \in I^I$ containing one element for each $G^I_i$, it completely specifies an agents' action profile, which we denote by $a_S = (a_{S,i})_{i \in N}$. For ease of presentation, in the following we overload notation and write $a_S = (a_{S,i})_{i \in N}$ also for independent sets $S \in I^I$ that are not bases, by letting all the unspecified actions be equal to the null one; formally, $a_{S,i} = a_\emptyset$ for all $i \in N \setminus N_S$.

The following theorem formalizes our reduction:

Theorem 1. Given an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem, the problem of computing a contract maximizing the principal's expected utility can be reduced in polynomial time to solving $\max_{S \in I^I} f^I(S)$ over the 1-partition matroid $M^I = (\{G^I_i\}_{i \in N}, I^I)$, where $f^I : 2^{G^I} \to \mathbb{R}$ is a set function such that, for every independent set $S \in I^I$, it holds:
$f^I(S) := R_{a_S} - \sum_{i \in N} \tilde{P}_{i,a_{S,i}}, \quad \text{where} \quad \tilde{P}_{i,a_{S,i}} = \min_{p \in \mathcal{P}_{i,a_{S,i}}} \sum_{\omega \in \Omega} F_{i,a_{S,i},\omega}\, p_{i,\omega}.$

Intuitively, $f^I(S)$ is equal to the maximum possible expected utility that the principal can get by means of contracts under which the actions in $a_S$ are IC and are those recommended by the principal to the agents. The proof of Theorem 1 relies on the following useful lemma, which shows that the optimal value of $f^I$ is always attained at a base of $M^I$.

Lemma 1. There always exists a base $S^* \in B(M^I)$ of $M^I$ such that $f^I(S^*) = \max_{S \in I^I} f^I(S)$.

Let us also remark that Lemma 1 and Theorem 1 immediately provide a polynomial-time algorithm for finding an optimal contract in principal-multi-agent instances without succinct rewards. Indeed, since the optimal value of $f^I$ is always attained at at least one base of the matroid $M^I$ (Lemma 1), in order to find an optimal contract it is sufficient to enumerate all the bases of $M^I$, which are $\ell^n$. Without a succinct reward representation, the size of an instance of the principal-multi-agent problem grows as $m^n$ (there is a reward value for each tuple of agents' outcomes), and, thus, the enumerative algorithm runs in time polynomial in the instance size.
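The inner minimization defining $\tilde{P}_{i,a}$ in Theorem 1 is itself a small linear program. The following sketch (hypothetical toy numbers; a single agent, whose row of the contract is the only one that matters since there are no externalities) computes it with scipy. It is a minimal illustration under these assumptions, not the paper's full reduction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data for one agent: F[a, w] over 2 actions and 2 outcomes; cost[a].
#   min_p F[a] @ p   s.t.   F[a] @ p - cost[a] >= F[b] @ p - cost[b]  for all b,
#                           p >= 0.
F = np.array([[0.9, 0.1],
              [0.2, 0.8]])
cost = np.array([0.0, 0.3])

def min_ic_payment(a):
    ell, m = F.shape
    # Rewrite each IC constraint as (F[b] - F[a]) @ p <= cost[b] - cost[a].
    A_ub = np.array([F[b] - F[a] for b in range(ell) if b != a])
    b_ub = np.array([cost[b] - cost[a] for b in range(ell) if b != a])
    res = linprog(c=F[a], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return res.fun

print(min_ic_payment(1))  # cheapest payment scheme making the costly action IC
```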
3.3 Preliminary Technical Results on 1-Partition Matroids

We first introduce a particular class of set functions defined over 1-partition matroids, which we call ordered-supermodular functions. In order to do this, we first need some additional notation. Given a 1-partition matroid $M := (\{G_i\}_{i \in [d]}, I)$, for each $i \in [d]$ we introduce a bijective function $\pi_i : [k_i] \to G_i$ to denote an ordering of the subset $G_i$, in which the elements are ordered from $\pi_i(1)$ to $\pi_i(k_i)$. Given two independent sets $S, S' \in I$ of the matroid, we denote by $S \wedge S'$ the partition-wise "maximum" of the two sets, i.e., the set made, for each partition $i \in [d]$, by the element $x \in (S \cup S') \cap G_i$ with maximal value of $\pi_i^{-1}(x)$ (notice that $(S \cup S') \cap G_i$ contains at most one element of $G_i$ for each of the two sets $S$ and $S'$). Analogously, we define $S \vee S'$ as the partition-wise "minimum" of the two sets.

Then, a set function is said to be ordered-supermodular if there exist some orderings of the sets $G_i$ such that the function satisfies the classical condition of supermodularity over the independent sets of the matroid, with the usual union and intersection operators replaced by the partition-wise "maximum" $\wedge$ and "minimum" $\vee$, respectively. Formally:

Definition 5 (Ordered-supermodular function). A set function $f : 2^G \to \mathbb{R}$ defined over a 1-partition matroid $M := (\{G_i\}_{i \in [d]}, I)$ is said to be ordered-supermodular if there exist bijective functions $\pi_i : [k_i] \to G_i$ for $i \in [d]$ such that, for every pair of independent sets $S, S' \in I$:
$f(S \wedge S') + f(S \vee S') \geq f(S) + f(S').$

Notice that, if one restricts the attention to independent sets $S, S' \in I$ such that $S \cup S' \in I$, then the condition for ordered-supermodularity coincides with that for supermodularity. Thus, intuitively, the former can be seen as a way of tightening the latter in order to also account for cases in which the union of independent sets is not independent.

Finally, we show that the characteristic feature of ordered-supermodular functions allows us to reduce their optimization to solving maximization problems of supermodular functions defined over rings of sets, which can be done in polynomial time (Schrijver, 2000; Bach, 2019).[9]

Theorem 2. The problem of maximizing an ordered-supermodular function over a 1-partition matroid can be reduced in polynomial time to maximizing a supermodular function over a ring of sets.

Corollary 1. The problem of maximizing an ordered-supermodular function over a 1-partition matroid admits a polynomial-time algorithm.

[9] We recall that a ring of sets is a family of sets $R$ that is closed under both union and intersection. Formally, given any two sets $S, S' \in R$, it holds $S \cup S' \in R$ and $S \cap S' \in R$ (Birkhoff, 1937).

4 Principal-Multi-Agent Problems with IR-Supermodular Rewards

In this section, we study principal-multi-agent problems with succinct rewards specified by IR-supermodular functions. First, in Section 4.1, we prove that in such a setting the problem of computing an optimal contract is inapproximable in polynomial time. Then, in Section 4.2, we show that, under mild assumptions, the problem can be solved in polynomial time.

4.1 Inapproximability Result

In order to prove the negative result, we provide a reduction from the LABEL-COVER problem, which consists in assigning labels to the vertices of a bipartite graph in order to satisfy some given constraints that define which pairs of labels can be assigned to vertices connected by an edge. In particular, we consider the promise version of the problem, in which, given an instance such that either there exists an assignment of labels satisfying at least a fraction $c$ of the constraints or all the possible assignments satisfy less than a fraction $s$ of them (with $s \leq c$), one has to establish which one of the two cases indeed holds. Such a problem is known to be NP-hard (Raz, 1998; Arora et al., 1998). We refer the reader to Appendix B for a formal definition of the problem.

Our inapproximability result formally reads as follows:
Theorem 3. For any constant $\rho > 0$, in principal-multi-agent problems with succinct rewards specified by an IR-supermodular function, it is NP-hard to design a contract providing a $\rho$-approximation of the principal's expected utility in an optimal contract, even when both the number of outcomes $m$ and the number of agents' actions $\ell$ are fixed.

Indeed, the proof of Theorem 3 provides an even stronger hardness result: it also shows that it is NP-hard to find a tuple of agents' actions $a \in A^n$ that is "approximately" optimal for the principal under a given contract $p \in \mathbb{R}^{n \times m}_+$. Formally, the following corollary holds:

Corollary 2. For any constant $\rho > 0$, in principal-multi-agent problems with succinct rewards specified by an IR-supermodular function, it is NP-hard to compute a $\rho$-approximate solution to the problem of finding the best (for the principal) tuple of IC agents' actions $a \in \times_{i \in N} A^*_i(p)$ for a given contract $p \in \mathbb{R}^{n \times m}_+$, even when both the number of outcomes $m$ and that of agents' actions $\ell$ are fixed.

Corollary 2 is readily proved by noticing that the proof of Theorem 3 continues to hold even if we restrict it to the null contract in which all the payments are zero.

4.2 A Polynomial-Time Algorithm for Instances Satisfying the FOSD Condition

In the following, we show how to circumvent the negative result established by Theorem 3. In particular, we prove that, in principal-multi-agent problems with succinct rewards specified by an IR-supermodular function, under some mild additional assumptions the problem of computing an optimal contract can indeed be solved in polynomial time.

We consider instances satisfying a particular first-order stochastic dominance (FOSD) condition, which is similar to several properties that are commonly studied in the contract theory literature (see, e.g., (Tadelis and Segal, 2005)). Moreover, such a condition is reasonably satisfied in many real-world settings. Intuitively, it states that the higher the cost of an agent's action, the bigger the probability with which such an action induces "good" outcomes. For instance, in salesperson problems with multiple products (Example 1), such a condition is always satisfied, since outcome vectors represent the quantity of each product being sold and action costs encode the effort levels undertaken by the agents. Naturally, a salesperson undertaking a higher level of effort in selling products will generate large volumes of sales with larger probability.

In order to formally define the FOSD condition, we first need to introduce some additional notation. Given a subset of outcomes $\Omega' \subseteq \Omega$, we say that $\Omega'$ is comprehensive whenever, for every $\omega \in \Omega'$ and $\omega' \in \Omega$, if $\omega' \leq \omega$ then $\omega' \in \Omega'$. Moreover, for ease of presentation, with a slight abuse of notation and w.l.o.g., we assume that the actions of each agent $i \in N$ are re-labeled so that $A = \{a_1, \ldots, a_\ell\}$ with $c_{i,a_j} \leq c_{i,a_{j+1}}$ for every $j \in [\ell-1]$. Then, we have the following definition:

Definition 6 (First-order stochastic dominance). An instance of the principal-multi-agent problem is said to satisfy the first-order stochastic dominance (FOSD) condition if, for every agent $i \in N$ and action index $j \in [\ell-1]$, the following holds for all the comprehensive sets $\Omega' \subseteq \Omega$:
$\sum_{\omega \in \Omega'} F_{i,a_{j+1},\omega} \leq \sum_{\omega \in \Omega'} F_{i,a_j,\omega}.$
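On small instances, the FOSD condition of Definition 6 can be checked by brute force, enumerating all comprehensive subsets of $\Omega$. The sketch below does so for an assumed toy instance with one-dimensional outcomes and two actions sorted by cost; it is illustrative only and exponential in $|\Omega|$.

```python
from itertools import combinations
import numpy as np

# Hypothetical toy instance: Omega = {(0,), (1,), (2,)} (vector outcomes work
# the same way via component-wise <=); F[j] is the distribution of action a_j.
Omega = [(0,), (1,), (2,)]
F = np.array([[0.7, 0.2, 0.1],    # cheaper action a_1
              [0.3, 0.4, 0.3]])   # costlier action a_2

def leq(w1, w2):
    return all(x <= y for x, y in zip(w1, w2))

def comprehensive_sets():
    """All subsets of Omega closed downward under component-wise <=."""
    for k in range(len(Omega) + 1):
        for sub in combinations(range(len(Omega)), k):
            S = set(sub)
            if all(j in S for i in S for j in range(len(Omega)) if leq(Omega[j], Omega[i])):
                yield S

fosd = all(sum(F[1][w] for w in S) <= sum(F[0][w] for w in S)
           for S in comprehensive_sets())
print("FOSD holds:", fosd)  # True on this toy instance
```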
Remark 2. A condition similar to Definition 6, called the monotone likelihood ratio property (MLRP), has been considered by Dütting et al. (2019), limited to the case in which outcomes are identified by scalar values. In such settings, the MLRP is strictly stronger than the FOSD condition. Our definition of FOSD generalizes the classical FOSD condition (see, e.g., (Tadelis and Segal, 2005)) from settings in which the outcomes are scalar values to those where they are vector values.

Next, we show how to design a polynomial-time algorithm for the problem of finding an optimal contract by exploiting the FOSD condition. Intuitively, the idea of the proof is the following. First, thanks to Theorem 1, we can reduce in polynomial time an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem to the optimization of a suitably-defined set function $f^I$ over a 1-partition matroid $M^I$ (see Theorem 1 and Definition 4 for the definitions of $f^I$ and $M^I$, respectively). Moreover, by Corollary 1, if $f^I$ is ordered-supermodular, we can solve the optimization problem in polynomial time. Hence, in order to prove the result, we simply need to show that, whenever the FOSD condition is satisfied, the function $f^I$ is indeed ordered-supermodular.

First, we prove the following preliminary result, which follows from (Østerdal, 2010).

Lemma 2. In principal-multi-agent problems with succinct rewards that satisfy the FOSD condition, for every agent $i \in N$ and pair $a_j, a_k \in A$ of agent $i$'s actions such that $j < k$, there exists a collection of probability distributions $\mu^\omega \in \Delta_{\Omega^-}$, one per outcome $\omega \in \Omega$, which are supported on the finite subset of the positive orthant $\Omega^- := \mathbb{R}^q_+ \cap \{\omega - \omega' \mid \omega, \omega' \in \Omega\}$ and satisfy the following equations:
$F_{i,a_k,\omega} = \sum_{\omega' \in \Omega} F_{i,a_j,\omega'}\, \mu^{\omega'}_{\omega - \omega'} \quad \forall \omega \in \Omega.$

Given Lemma 2, we are ready to show that, if the instance $I$ meets the FOSD condition, then its corresponding set function $f^I$ is indeed ordered-supermodular over the 1-partition matroid $M^I$.

Lemma 3. Given an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem that (i) has succinct rewards specified by an IR-supermodular function and (ii) satisfies the FOSD condition, the set function $f^I$ defined over the 1-partition matroid $M^I = (\{G^I_i\}_{i \in N}, I^I)$ is ordered-supermodular.

Finally, Lemma 3 allows us to prove the main positive result of this section:

Theorem 4. For principal-multi-agent problem instances that (i) have succinct rewards specified by an IR-supermodular function and (ii) satisfy the FOSD condition, the problem of computing an optimal contract admits a polynomial-time algorithm.

5 Principal-Multi-Agent Problems with DR-Submodular Rewards

In this section, we switch the attention to principal-multi-agent problems with succinct rewards specified by DR-submodular functions. First, similarly to the case of IR-supermodular reward functions, we provide a strong negative result. In particular, we show that the problem of computing an optimal contract cannot be approximated up to within any constant factor, even when both the number of agents' actions and the dimensionality of outcome vectors are fixed.
In order to prove the negative result, we provide a reduction from the promise version of the well-known INDEPENDENT-SET problem. In such a version of the problem, one is given an undirected graph $G := (V, E)$ such that either there exists an independent set of size at least $|V|^{1-\alpha}$—for some $\alpha > 0$—or all the independent sets have size at most $|V|^\alpha$, and is asked to decide which one of the two cases holds. Such a problem is known to be NP-hard for any $\alpha > 0$ (Håstad, 1999; Zuckerman, 2007). This is exploited by our reduction in order to prove Theorem 5. The reader can find more details on the definition of the promise version of INDEPENDENT-SET in Appendix D.

Theorem 5. For any constant $\alpha > 0$, in principal-multi-agent problems with succinct rewards specified by a DR-submodular function, it is NP-hard to design a contract providing an $n^{1-\alpha}$-approximation of the principal's expected utility in an optimal contract, even when both the number of agents' actions $\ell$ and the dimensionality $q$ of outcome vectors are fixed.

Next, we complement the inapproximability result in Theorem 5 by providing a polynomial-time approximation algorithm for the problem. In order to do so, we exploit the fact that, in settings with succinct rewards specified by DR-submodular functions, the set function $f^I$ constructed in Theorem 1 is always a submodular function over the 1-partition matroid $M^I$. However, this is not sufficient, since such a function is non-monotone and non-positive, and, thus, we need to deploy some non-standard tools in order to come up with a polynomial-time approximation algorithm.

As a first step, given an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem, we extend the definition of the function $f^I$ to all the subsets of $G^I$ (notice that Theorem 1 provides a value of $f^I$ only for the independent sets in $I^I$). To do this, we first need to introduce some additional notation.

For ease of presentation, in the rest of this section we will make the following w.l.o.g. assumption:

Assumption 3 (Null outcome). There exists an outcome $\omega_\emptyset \in \Omega$ such that $\omega_\emptyset = \mathbf{0} \in \mathbb{R}^q$ and, for every agent $i \in N$, it holds that $F_{i,a_\emptyset,\omega_\emptyset} = 1$ and $F_{i,a,\omega_\emptyset} = 0$ for all $a \in A \setminus \{a_\emptyset\}$.

Then, by slightly abusing notation, given any $S \subseteq G^I$ we let $F_{i,S}$ be the probability distribution of the sum of independent random variables distributed as $F_{i,a}$, one for each pair $(i, a)$ in $S \cup \{(i, a_\emptyset) \mid i \in N\}$. Notice that the probability distributions defined above are no longer supported on the set of outcomes $\Omega$, but rather on the set of all the possible vectors in $\mathbb{R}^q_+$ that can be obtained as the sum of at most $n\ell$ (possibly repeated) vectors in $\Omega$. We denote such a set by $\tilde{\Omega} \subseteq \mathbb{R}^q_+$, and let $F_{i,S,\omega}$ be the probability that $F_{i,S}$ assigns to $\omega \in \tilde{\Omega}$. Moreover, we let $\tilde{\Omega}^n := \times_{i \in N} \tilde{\Omega}$. Finally, we overload notation and let $R_S := \sum_{\omega \in \tilde{\Omega}^n} r_\omega \prod_{i \in N} F_{i,S,\omega_i}$ for any $S \subseteq G^I$. Notice that, since any independent set $S \in I^I$ includes at most one pair $(i, a)$ for each agent $i \in N$, it is easy to check that $R_S = R_{a_S}$ (see Section 3 for the definition of $a_S$).

We are now ready to provide the formal definition of the extension of $f^I$:

Definition 7 (Extension of $f^I$). Given an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem, the extension of $f^I$ to all the subsets of $G^I$ is such that, for every $S \subseteq G^I$:
$f^I(S) := R_S - \sum_{(i,a) \in S} \tilde{P}_{i,a}, \quad \text{where} \quad \tilde{P}_{i,a} := \min_{p \in \mathcal{P}_{i,a}} \sum_{\omega \in \Omega} F_{i,a,\omega}\, p_{i,\omega}.$
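Concretely, for a single agent with scalar outcomes, $F_{i,S}$ is just the convolution of the involved distributions. The following toy Python sketch (all numbers and action names hypothetical) builds it; the support of the result lies in the enlarged outcome set $\tilde{\Omega}$.

```python
from collections import defaultdict

# One agent, scalar outcomes; the null action a0 puts all mass on the null outcome 0.
F = {"a0": {0: 1.0},
     "a1": {0: 0.5, 1: 0.5},
     "a2": {0: 0.2, 1: 0.3, 2: 0.5}}

def convolve(d1, d2):
    out = defaultdict(float)
    for w1, p1 in d1.items():
        for w2, p2 in d2.items():
            out[w1 + w2] += p1 * p2   # outcomes add, probabilities multiply
    return dict(out)

def F_S(selected_actions):
    """Law of the sum of independent draws, one per selected action plus a0."""
    dist = F["a0"]                    # the pair (i, a0) is always included
    for a in selected_actions:
        dist = convolve(dist, F[a])
    return dist

print(F_S(["a1", "a2"]))  # supported on the enlarged outcome set ~Omega
```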
The crucial result that we need in order to design a polynomial-time approximation algorithm is the following Lemma 4, which shows that the extended function $f^I$ can be decomposed as the sum of a monotone-increasing submodular function and a linear one. Formally:

Lemma 4. Given an instance $I := (N, \Omega, A)$ of the principal-multi-agent problem with succinct rewards specified by a DR-submodular function, the extended set function $f^I$ (see Definition 7) can be written as $f^I(S) := \mathsf{f}^I(S) + l^I(S)$ for every $S \subseteq G^I$, where $\mathsf{f}^I : 2^{G^I} \to \mathbb{R}_+$ is a monotone-increasing submodular function and $l^I : 2^{G^I} \to \mathbb{R}$ is a linear function, both defined over the 1-partition matroid $M^I$.

Lemma 4 allows us to apply a result by Sviridenko et al. (2017), who provide a polynomial-time approximation algorithm for the problem of optimizing the sum of a monotone-increasing submodular function and a linear one over a matroid. This immediately gives the following result:

Theorem 6. In principal-multi-agent problems with succinct rewards specified by a DR-submodular function, the problem of computing an optimal contract admits a polynomial-time approximation algorithm that, for any $\epsilon > 0$ given as input, with high probability outputs a contract with principal's expected utility at least $(1 - 1/e) R_{(p,a^*)} - P_{(p,a^*)} - \epsilon$ for any contract $(p, a^*)$, where $R_{(p,a^*)} \in [0,1]$ and $P_{(p,a^*)} \in \mathbb{R}_+$ denote, respectively, the expected reward and the expected payment under $(p, a^*)$.

6 Bayesian Principal-Multi-Agent Problems

In this last section, we study Bayesian principal-multi-agent problems in which each agent has a private type determining their action costs and distributions over outcomes. In particular, we extend the Bayesian model recently introduced by Castiglioni et al. (2022b) to multi-agent settings.

First, in Section 6.1, we formally introduce Bayesian principal-multi-agent problems and all their related concepts. Then, Section 6.2 provides a formulation of the computational problem that the principal has to solve in Bayesian settings. Next, in Section 6.3, we show how such a problem can be "approximately formulated" as an LP with exponentially-many variables and polynomially-many constraints. Finally, in Section 6.4, we exploit such a formulation to design a polynomial-time approximation algorithm for the problem, based on an ad hoc implementation of the ellipsoid method that uses an approximate separation oracle which can be implemented in polynomial time in settings having the same properties as those in which we derived our positive results in Sections 4 and 5.

6.1 The Model

An instance of the Bayesian principal-multi-agent problem is characterized by a tuple $(N, \Theta, \Omega, A)$, where $N$, $\Omega$, and $A$ are defined as in non-Bayesian instances, while $\Theta$ is a finite set of agents' types.[10] We denote by $\theta \in \Theta^n := \times_{i \in N} \Theta$ a tuple of agents' types, whose $i$-th component $\theta_i$ represents the type of agent $i$. We assume that agents' types are jointly determined according to a probability distribution $\lambda \in \Delta_{\Theta^n}$ supported on a subset $\mathrm{supp}(\lambda) \subseteq \Theta^n$ of tuples of agents' types—with $\lambda_\theta$ being the probability assigned to $\theta \in \mathrm{supp}(\lambda)$—and that such a distribution is commonly known to the principal and all the agents.[11] Action costs and distributions over outcomes are extended so that they also depend on the agent's type; formally, they are denoted as $F_{i,\theta,a}$ and $c_{i,\theta,a}$, where $\theta \in \Theta$ is the type of agent $i \in N$. Similarly, we extend the definition of expected reward, denoted as $R_{\theta,a}$.
Moreover, w.l.o.g., we modify Assumption 1 so that the null action $a_\emptyset$ now satisfies $c_{i,\theta,a_\emptyset} = 0$ for all $i \in N$ and $\theta \in \Theta$. Finally, for an agent $i \in N$ of type $\theta \in \Theta$, we define $A^*_{i,\theta}(p) \subseteq A$ as the set of actions that are IC under a given contract $p \in \mathbb{R}^{n \times m}_+$, while $\mathcal{P}_{i,\theta,a} \subseteq \mathbb{R}^{n \times m}_+$ denotes the set of contracts under which a given action $a \in A$ is IC.

Following the line of Castiglioni et al. (2022b), we consider the case in which the principal commits to a menu of randomized contracts. In our multi-agent setting, a randomized contract is defined as a probability distribution $\gamma$ supported on $\mathbb{R}^{n \times m}_+$. Then, a menu consists in a collection $\Gamma = (\gamma_\theta)_{\theta \in \Theta^n}$ containing a randomized contract $\gamma_\theta$ for each possible tuple of agents' types $\theta \in \Theta^n$.

The interaction between the principal and agents having types specified by $\theta \sim \lambda$ goes as follows:
1. the principal commits to a menu of randomized contracts $\Gamma = (\gamma_\theta)_{\theta \in \Theta^n}$;
2. each agent $i \in N$ reports a type $\hat{\theta}_i \in \Theta$ to the principal (possibly different from their type $\theta_i$);
3. the principal draws a contract $p \sim \gamma_{\hat{\theta}}$, where $\hat{\theta} \in \Theta^n$ denotes the tuple of agents' types whose $i$-th component is the type $\hat{\theta}_i$ reported by agent $i$;
4. each agent $i \in N$ plays an IC action $a_i \in A^*_{i,\theta_i}(p)$ according to their true type $\theta_i$, resulting in a tuple of agents' actions $a \in \times_{i \in N} A^*_{i,\theta_i}(p)$.

As discussed in Section 2 (see Remark 1), in our multi-agent setting a contract does not only need to specify payments, but also action recommendations for the agents.

[10] For ease of exposition, all agents share the same set $\Theta$. Our results can be easily extended to the case of agent-specific sets.
[11] Let us remark that, as it is the case for action costs and distributions over outcomes, as well as rewards, the probabilities defining the distribution $\lambda$ are part of the representation of a Bayesian principal-multi-agent problem instance, and, thus, they are part of the input to the principal's optimization problem. Hence, the running time of any polynomial-time algorithm for such a problem must depend polynomially on the size of $\mathrm{supp}(\lambda)$. It is crucial that only probabilities $\lambda_\theta$ corresponding to tuples of agents' types $\theta \in \mathrm{supp}(\lambda)$ in the support of $\lambda$ are specified as input, otherwise the size of the input representation would always be exponential in $n$, rendering the task of designing polynomial-time algorithms straightforward.
Thus, in the rest of this section, whenever we refer to a contract $p \in \mathbb{R}^{n \times m}_+$ belonging to the support of a randomized contract $\gamma_\theta$ for $\theta \in \Theta^n$, we always assume that it is paired with a tuple $a^* \in \times_{i \in N} A^*_{i,\theta_i}(p)$ of IC (for the types specified by $\theta$) action recommendations for the agents.

In a Bayesian setting, the goal of the principal is to commit to an optimal menu of randomized contracts, which is one maximizing their expected utility, obtained by extending the non-Bayesian expression in Definition 1 to also account for the expectation with respect to the distribution $\lambda$ of agents' types and the distributions $\gamma_\theta$ defining the randomized contracts in the menu (see Objective (1a) below for a formal mathematical formula).

As in single-agent settings (Castiglioni et al., 2022b), it is possible to focus w.l.o.g. on menus of randomized contracts that are dominant-strategy incentive compatible (DSIC).[12] These are menus such that the agents are always incentivized to truthfully report their type to the principal, no matter the types reported by others (see Constraints (1b) for a formalization of the DSIC conditions).

[12] It is easy to show that focusing on DSIC menus of randomized contracts is w.l.o.g. by using a revelation-principle-style argument. See the book by Shoham and Leyton-Brown (2008) for some examples of these kinds of arguments.

6.2 Formulating the Principal's Optimization Problem

Next, we show how to formulate the problem of computing an optimal DSIC menu of randomized contracts in Bayesian principal-multi-agent problems. The formulation that we propose in the following is specifically tailored so as to ease the design of our approximation algorithm.

As a first step, we show that we can focus w.l.o.g. on randomized contracts $\gamma_\theta$ having a finite support $\mathrm{supp}(\gamma_\theta) \subseteq \mathbb{R}^{n \times m}_+$. Such a result is already known for single-agent settings (see Lemma 1 in (Castiglioni et al., 2022b) and Theorem 1 in (Gan et al., 2022)), but it can be easily generalized to our multi-agent problems. In particular, since in our model there are no externalities among the agents, it is immediate to adapt the results of Castiglioni et al. (2022b) and Gan et al. (2022) in order to show that there always exists an optimal DSIC menu of randomized contracts such that, for every agent $i \in N$ and tuple of agents' types $\theta \in \Theta^n$, the contracts in the support $\mathrm{supp}(\gamma_\theta)$ of $\gamma_\theta$ specify at most one different agent $i$'s payment scheme for each action $a \in A$. Moreover, such an agent $i$'s payment scheme is such that action $a$ is IC when the type of agent $i$ is $\theta_i$. Formally:

Lemma 5. Given an instance $I := (N, \Theta, \Omega, A)$ of the Bayesian principal-multi-agent problem and a DSIC menu of randomized contracts, there always exists another DSIC menu of randomized contracts $\Gamma = (\gamma_\theta)_{\theta \in \Theta^n}$ with at least the same principal's expected utility such that, for every $i \in N$ and $\theta \in \Theta^n$, it holds that $|\{ p_i \mid p \in \mathrm{supp}(\gamma_\theta) \wedge p \in \mathcal{P}_{i,\theta_i,a} \}| \leq 1$ for all $a \in A$, where $p_i \in \mathbb{R}^m_+$ denotes the $i$-th row of matrix $p$ (i.e., the agent $i$'s payment scheme under contract $p$).

Lemma 5 allows us to identify the contracts in the support $\mathrm{supp}(\gamma_\theta)$ of $\gamma_\theta$ with their corresponding tuples of action recommendations for the agents, since there can be at most one different contract for each one of such tuples. Thus, in order to characterize the elements defining a menu of randomized contracts which are needed for our purposes, it is sufficient to specify:

- for every tuple of agents' types $\theta \in \Theta^n$ and tuple of agents' actions $a \in A^n$, the probability $t_{\theta,a} \in [0,1]$ that the randomized contract $\gamma_\theta$ places on the contract whose corresponding action recommendations for the agents are specified by $a$;
- for every agent $i \in N$, tuple of agents' types $\theta \in \Theta^n$, and action $a \in A$, the probability $\xi_{i,\theta,a} \in [0,1]$ with which agent $i$ is recommended to play action $a$ after the agents collectively reported the types specified by $\theta$ to the principal (these marginals are tied to the joint probabilities $t_{\theta,a}$, as illustrated in the sketch after this list);
- for every agent $i \in N$, tuple of agents' types $\theta \in \Theta^n$, action $a \in A$, and outcome $\omega \in \Omega$, the payment $p_{i,\theta,a,\omega} \geq 0$ from the principal to agent $i$ when the agents reported the types in $\theta$ to the principal, agent $i$ is recommended action $a$, and the realized outcome is $\omega$.
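As a small consistency illustration of how these variables fit together (a hypothetical toy example, not the paper's algorithm), the following sketch recovers the marginal recommendation probabilities $\xi$ from a joint distribution $t$; this is exactly the relation imposed by Constraint (1d) below.

```python
from itertools import product
from collections import defaultdict

# Toy data for one reported tuple of types: t maps each recommended action
# profile to its probability (uniform here, purely for illustration).
agents, actions = [0, 1], ["a0", "a1"]
t = {prof: 0.25 for prof in product(actions, repeat=len(agents))}

xi = defaultdict(float)
for prof, prob in t.items():
    for i, a in zip(agents, prof):
        xi[(i, a)] += prob            # xi_{i,theta,a} = sum over {a : a_i = a} of t

assert all(abs(sum(xi[(i, a)] for a in actions) - 1.0) < 1e-9 for i in agents)
print(dict(xi))   # each agent is recommended each action w.p. 0.5
```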
As for the additional notation, we let Θ̃^n := supp(λ) be the set of tuples of agents' types that could possibly be reported to the principal if the agents truthfully reveal their types. Moreover, given any θ ∈ Θ^n, we let θ_{−i} be the tuple obtained by dropping agent i's type θ_i from θ. Then, given a type θ′ ∈ Θ, we write (θ′, θ_{−i}) to denote the tuple obtained by adding θ′ to θ_{−i} as agent i's type, so that θ = (θ_i, θ_{−i}). Finally, for every agent i ∈ N, we denote by Θ̃^n_{−i} := { θ_{−i} : θ ∈ Θ̃^n } the set of all tuples of types that could possibly be reported to the principal by the agents other than i, assuming that they truthfully reveal their types.
Footnote 13: Using the sets Θ̃^n and Θ̃^n_{−i} to index the variables appearing in Problem (1) is crucial in order to guarantee that the number of variables and the number of constraints defining the problem are polynomial in the size of supp(λ). Indeed, indexing the variables over all the tuples of agents' types in Θ^n would lead to a number of variables and constraints exponential in n.
We can now formulate the principal's optimization problem as follows:
sup  Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A^n} t_{θ,a} R_{θ,a} − Σ_{i∈N} Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A} ξ_{i,θ,a} Σ_{ω∈Ω} F_{i,θ_i,a,ω} p_{i,θ,a,ω}   s.t.   (1a)
Σ_{a∈A} ξ_{i,θ,a} ( Σ_{ω∈Ω} F_{i,θ_i,a,ω} p_{i,θ,a,ω} − c_{i,θ_i,a} ) ≥ Σ_{a∈A} ξ_{i,(θ′,θ_{−i}),a} max_{a′∈A} ( Σ_{ω∈Ω} F_{i,θ_i,a′,ω} p_{i,(θ′,θ_{−i}),a,ω} − c_{i,θ_i,a′} )   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ   (1b)
Σ_{a∈A} ξ_{i,(θ′,θ_{−i}),a} = 1   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}   (1c)
Σ_{a∈A^n : a_i=a} t_{θ,a} = ξ_{i,θ,a}   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀a ∈ A   (1d)
t_{θ,a} ≥ 0   ∀θ ∈ Θ̃^n, ∀a ∈ A^n   (1e)
ξ_{i,(θ′,θ_{−i}),a} ≥ 0   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A   (1f)
p_{i,(θ′,θ_{−i}),a,ω} ≥ 0   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A, ∀ω ∈ Ω,   (1g)
where Objective (1a) is the principal's expected utility for the menu of randomized contracts encoded by the variables in the problem, Constraints (1b) specify the conditions ensuring that the menu is DSIC (for θ′ ≠ θ_i), as well as the conditions guaranteeing that any action a such that ξ_{i,θ,a} > 0 is IC for an agent i of type θ_i under the payments defined by the variables p_{i,θ,a,ω}, while Constraints (1c) and (1d) ensure that the menu of randomized contracts is well defined.
Notice that Problem (1) is defined in terms of a sup rather than a max. This is because, as shown in (Castiglioni et al., 2022b), even in single-agent settings the problem of computing an optimal DSIC menu of randomized contracts may not admit a maximum. In the following, for ease of presentation, we let SUP be the optimal value of Problem (1) (i.e., the value of the supremum).
6.3 An "Approximately-optimal" LP Formulation
As a preliminary step towards the design of our approximation algorithm (see Section 6.4), we show how to find an "approximately-optimal" DSIC menu of randomized contracts by solving an LP which features exponentially-many variables and polynomially-many constraints.
In the following, we will make extensive use of the set A_{i,θ} ⊆ A of actions which are inducible for an agent i ∈ N of type θ ∈ Θ. This is the set of all actions that are IC for an agent i of type θ under at least one contract; formally, A_{i,θ} := { a ∈ A | ∃p ∈ R^{n×m}_+ : a ∈ A*_{i,θ}(p) }.
First, we can prove the following useful result:
Lemma 6.
There exists a function τ : N → R such that τ(x) is O(2^{poly(x)}) (with poly(x) being a polynomial in x) and, for every instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem, agent i ∈ N, type θ ∈ Θ, and inducible action a ∈ A_{i,θ}, there exists a contract p ∈ R^{n×m}_+ such that a ∈ A*_{i,θ}(p) and p_{i,ω} ≤ τ(|I|) for all ω ∈ Ω, where |I| is the size of instance I.
Footnote 14: In the rest of the section, we always assume that the size of a problem instance is expressed in terms of number of bits.
Intuitively, Lemma 6 states that, if an action a is inducible for an agent i of type θ, then there exists a contract under which such an action is IC and whose payments are "small", in the sense that they can be represented with a number of bits that is upper bounded by a quantity depending polynomially on the size of the problem instance. As we show next, such a result is crucial for proving Theorem 7, as it allows us to satisfactorily bound the principal's expected utility loss due to solving an LP rather than Problem (1).
Next, we formally introduce LP (2), which is obtained from Problem (1) by (i) replacing each product of two variables ξ_{i,θ,a} p_{i,θ,a,ω} with a single variable y_{i,θ,a,ω}, (ii) considering the inducible actions in A_{i,θ} as the only actions available to an agent i of type θ, and (iii) linearizing the max operator in Constraints (1b). By letting A^{n,θ} := ×_{i∈N} A_{i,θ_i} for every θ ∈ Θ̃^n, we can write:
max  Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A^{n,θ}} t_{θ,a} R_{θ,a} − Σ_{i∈N} Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A_{i,θ_i}} Σ_{ω∈Ω} F_{i,θ_i,a,ω} y_{i,θ,a,ω}   s.t.   (2a)
Σ_{a∈A_{i,θ_i}} ( Σ_{ω∈Ω} y_{i,θ,a,ω} F_{i,θ_i,a,ω} − ξ_{i,θ,a} c_{i,θ_i,a} ) ≥ Σ_{a∈A_{i,θ′}} γ_{i,θ,θ′,a}   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ   (2b)
γ_{i,θ,θ′,a} ≥ Σ_{ω∈Ω} y_{i,(θ′,θ_{−i}),a,ω} F_{i,θ_i,a′,ω} − ξ_{i,(θ′,θ_{−i}),a} c_{i,θ_i,a′}   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ, ∀a ∈ A_{i,θ′}, ∀a′ ∈ A_{i,θ_i}   (2c)
Σ_{a∈A_{i,θ′}} ξ_{i,(θ′,θ_{−i}),a} = 1   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}   (2d)
Σ_{a∈A^{n,θ} : a_i=a} t_{θ,a} = ξ_{i,θ,a}   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀a ∈ A_{i,θ_i}   (2e)
t_{θ,a} ≥ 0   ∀θ ∈ Θ̃^n, ∀a ∈ A^{n,θ}   (2f)
ξ_{i,(θ′,θ_{−i}),a} ≥ 0   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A_{i,θ′}   (2g)
y_{i,(θ′,θ_{−i}),a,ω} ≥ 0   ∀i ∈ N, ∀θ′ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A_{i,θ′}, ∀ω ∈ Ω   (2h)
γ_{i,θ,θ′,a} free   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ, ∀a ∈ A_{i,θ′}.   (2i)
By letting LP be the optimal value of LP (2), the following lemma shows that such a value is always at least as large as the value of the supremum defined in Problem (1).
Lemma 7. For every instance of the Bayesian principal-multi-agent problem, it holds that LP ≥ SUP.
Lemma 7 is proved by showing that, given any feasible solution to Problem (1), it is possible to recover a feasible solution to LP (2) having the same objective function value. However, the converse is not true in general: given a feasible solution to LP (2), it is not always possible to build a feasible solution to Problem (1) having at least the same value. Thus, it might be the case that SUP < LP. This is caused by the existence of what we call irregular feasible solutions to LP (2):
Definition 8.
A feasible solution to LP (2) is said to be irregular if there exist an agent i ∈ N, a tuple of agents' types θ ∈ Θ̃^n, an inducible action a ∈ A_{i,θ_i}, and an outcome ω ∈ Ω such that y_{i,θ,a,ω} > 0 and ξ_{i,θ,a} = 0. A feasible solution to LP (2) is said to be regular if it is not irregular.
It is easy to see that, given a regular feasible solution to LP (2), we can recover a feasible solution to Problem (1) with the same objective function value by simply letting p_{i,θ,a,ω} = y_{i,θ,a,ω}/ξ_{i,θ,a} for every i ∈ N, θ ∈ Θ̃^n, a ∈ A_{i,θ_i}, and ω ∈ Ω with ξ_{i,θ,a} > 0 (the payments for actions with ξ_{i,θ,a} = 0 can be set arbitrarily, e.g., to zero, since regularity forces the corresponding y variables to be zero). However, the same is not true for irregular solutions, as the operation above is clearly ill defined in that case. Nevertheless, we show that, given any irregular feasible solution to LP (2), it is always possible to build a regular solution incurring only an arbitrarily small loss in objective function value. Formally:
Lemma 8. Given an instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem and an irregular solution to LP (2) with value VAL, for any ε > 0, it is possible to recover a regular solution to LP (2) with value at least VAL − ε(n τ(|I|) + 1) in time polynomial in |I| and 1/ε, where τ is a function defined as per Lemma 6 and |I| denotes the size of instance I.
Finally, we are ready to prove that solving LP (2) in place of Problem (1) allows us to recover in polynomial time a DSIC menu of randomized contracts that only incurs an arbitrarily small loss with respect to the value SUP of the supremum of Problem (1). Formally:
Theorem 7. Given an instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem and an optimal solution to LP (2), for any ε > 0, it is possible to recover a feasible solution to Problem (1) with value at least SUP − ε(n τ(|I|) + 1) in time polynomial in |I| and 1/ε, where τ is a function defined as per Lemma 6 and |I| denotes the size of instance I.
6.4 Approximation Algorithm
LP (2) features exponentially-many variables and polynomially-many constraints, and, thus, it can be solved in polynomial time by applying the ellipsoid method to its dual, provided access to a suitable polynomial-time separation oracle for the constraints of the dual (Grötschel et al., 2012).
In this last section, we show that, although an (exact) polynomial-time separation oracle may not be available in our setting, it is always possible to design a polynomial-time approximate separation oracle. This, together with some ad hoc modifications to the ellipsoid method, allows us to design the desired approximation algorithm for the problem of interest. Indeed, in instances that satisfy the FOSD condition and have IR-supermodular succinct rewards, it is possible to design an (exact) polynomial-time separation oracle. Instead, in instances having DR-submodular succinct rewards, this is not possible, and, thus, we need an approximate separation oracle.
Footnote 15: Notice that the existence of an exact oracle for instances with DR-submodular rewards would contradict Theorem 5.
We start by introducing a relaxation of LP (2) (see LP (3) below) and by showing that the two LPs are indeed equivalent (see Lemma 9 below). Such a preliminary step allows us to obtain a dual LP which has additional constraints on its variables, which will be crucial in order to design a polynomial-time approximate separation oracle. The relaxation of LP (2), which is obtained by replacing the '=' in Constraints (2e) with a '≤', reads as follows:
max  Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A^{n,θ}} t_{θ,a} R_{θ,a} − Σ_{i∈N} Σ_{θ∈Θ̃^n} λ_θ Σ_{a∈A_{i,θ_i}} Σ_{ω∈Ω} F_{i,θ_i,a,ω} y_{i,θ,a,ω}   s.t.   (3a)
Σ_{a∈A^{n,θ} : a_i=a} t_{θ,a} ≤ ξ_{i,θ,a}   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀a ∈ A_{i,θ_i}   (3b)
Constraints (2b)–(2d) and (2f)–(2i).
Lemma 9. For every instance of the Bayesian principal-multi-agent problem, LP (2) and LP (3) have the same optimal value. Moreover, given a feasible solution to LP (3), it is always possible to recover in polynomial time a feasible solution to LP (2) having at least the same value.
By Lemma 9, we can solve LP (3) instead of LP (2). The dual problem of LP (3) reads as follows:
Footnote 16: Notice that, in LP (4), we use 1{·} to denote the indicator function of the event written within curly braces.
min  Σ_{i∈N} Σ_{θ∈Θ} Σ_{θ_{−i}∈Θ̃^n_{−i}} x_{i,θ,θ_{−i}}   s.t.   (4a)
−1{(θ, θ_{−i}) ∈ Θ̃^n} ( Σ_{θ′∈Θ} y_{i,(θ,θ_{−i}),θ′} ) c_{i,θ,a} + Σ_{θ′∈Θ : (θ′,θ_{−i})∈Θ̃^n} Σ_{a′∈A_{i,θ′}} c_{i,θ′,a′} z_{i,(θ′,θ_{−i}),θ,a,a′} + d_{i,θ,θ_{−i}} − 1{(θ, θ_{−i}) ∈ Θ̃^n} x_{i,θ,θ_{−i}} ≥ 0   ∀i ∈ N, ∀θ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A_{i,θ}   (4b)
−y_{i,θ,θ′} + Σ_{a′∈A_{i,θ_i}} z_{i,θ,θ′,a,a′} ≥ 0   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ, ∀a ∈ A_{i,θ′}   (4c)
1{(θ, θ_{−i}) ∈ Θ̃^n} ( Σ_{θ′∈Θ} y_{i,(θ,θ_{−i}),θ′} ) F_{i,θ,a,ω} − Σ_{θ′∈Θ : (θ′,θ_{−i})∈Θ̃^n} Σ_{a′∈A_{i,θ′}} F_{i,θ′,a′,ω} z_{i,(θ′,θ_{−i}),θ,a,a′} ≥ −1{(θ, θ_{−i}) ∈ Θ̃^n} λ_{(θ,θ_{−i})} F_{i,θ,a,ω}   ∀i ∈ N, ∀θ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}, ∀a ∈ A_{i,θ}, ∀ω ∈ Ω   (4d)
Σ_{i∈N} y_{i,θ,a_i} ≥ λ_θ R_{θ,a}   ∀θ ∈ Θ̃^n, ∀a ∈ A^{n,θ}   (4e)
x_{i,θ,θ_{−i}} free   ∀i ∈ N, ∀θ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i}   (4f)
y_{i,θ,a} ≥ 0   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀a ∈ A_{i,θ_i}   (4g)
z_{i,θ,θ′,a,a′} ≤ 0   ∀i ∈ N, ∀θ ∈ Θ̃^n, ∀θ′ ∈ Θ, ∀a ∈ A_{i,θ′}, ∀a′ ∈ A_{i,θ_i}   (4h)
d_{i,θ,θ_{−i}} free   ∀i ∈ N, ∀θ ∈ Θ, ∀θ_{−i} ∈ Θ̃^n_{−i},   (4i)
where the x_{i,θ,θ_{−i}} are dual variables corresponding to Constraints (2d), the y_{i,θ,a} to Constraints (2e), the z_{i,θ,θ′,a,a′} to Constraints (2c), and the d_{i,θ,θ_{−i}} to Constraints (2b).
Footnote 17: We recall that the distribution λ is part of the problem instance given as input to our algorithm, and, thus, both |Θ̃^n| and |Θ̃^n_{−i}| are polynomial quantities in the size of such an instance.
The dual LP (4) features polynomially-many variables and exponentially-many constraints. Moreover, Constraints (4e) are the only ones whose number is exponential in the size of the problem instance, since there is one such constraint for every tuple of agents' actions a ∈ A^{n,θ}. Thus, in order to have the ellipsoid method run in polynomial time on LP (4), it is sufficient to design a polynomial-time separation oracle for Constraints (4e), as the others can be checked one by one in polynomial time. As we show next, only an "approximate version" of such a separation oracle can be implemented in polynomial time, according to the following definition.
Definition 9 (Approximate separation oracle). Given any α ∈ (0, 1], an approximate separation oracle for Constraints (4e) is a procedure O_α(·, ·, ·, ·) which takes as input an instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem, a tuple of agents' types θ ∈ Θ̃^n, a vector w ∈ R^{nℓ} of weights (with w_{i,a} denoting the component corresponding to agent i ∈ N and action a ∈ A_{i,θ_i}), and an additive error ε > 0, and returns a tuple of agents' actions a ∈ A^{n,θ} such that:
λ_θ R_{θ,a} − Σ_{i∈N} w_{i,a_i} ≥ α λ_θ R_{θ,a′} − Σ_{i∈N} w_{i,a′_i} − ε   ∀a′ ∈ A^{n,θ},
in time polynomial in |I|, max_{i∈N, a∈A} |w_{i,a}|, and 1/ε, where |I| denotes the size of instance I.
Notice that, by letting each weight w_{i,a} be equal to y_{i,θ,a} for some feasible solution to LP (4), the problem solved in a call O_α(I, θ, w, ε) to the approximate separation oracle intuitively consists of finding the most violated constraint among Constraints (4e), up to a reward-multiplying approximation factor α and an additive error ε given as input.
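As an illustration of this interface only (not of the polynomial-time implementations discussed in the rest of the section), the following Python sketch realizes Definition 9 by brute-force enumeration, which is exact (α = 1, ε = 0) but exponential in the number of agents; all names are hypothetical.

```python
# Brute-force sketch of the separation oracle of Definition 9.
from itertools import product

def separation_oracle(lam_theta, reward, weights, inducible):
    """Return the action tuple maximizing lam_theta * R(a) - sum_i w[i][a_i].

    lam_theta: probability lambda_theta of the reported tuple of types.
    reward:    callable mapping an action tuple a to the reward R_{theta,a}.
    weights:   weights[i][a] is the weight w_{i,a} (e.g., a dual value y_{i,theta,a}).
    inducible: inducible[i] is the list of inducible actions A_{i,theta_i} of agent i.
    """
    best_a, best_val = None, float("-inf")
    for a in product(*inducible):
        val = lam_theta * reward(a) - sum(weights[i][ai] for i, ai in enumerate(a))
        if val > best_val:
            best_a, best_val = a, val
    return best_a, best_val
```

Under this reading, some constraint among Constraints (4e) is violated at the current dual point exactly when the returned value is strictly positive, in which case the corresponding inequality can be fed back to the ellipsoid method as a cutting plane.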
Next, we show that it is possible to apply an ad hoc implementation of the ellipsoid method to LP (4) which, given access to an approximate separation oracle for Constraints (4e) as per Definition 9, returns a feasible solution to LP (4) that provides a desirable approximation of the optimal value of LP (4). Such a procedure can be embedded in a suitable binary search scheme, resulting in a polynomial-time approximation algorithm for the principal's optimization problem.
Theorem 8. Given access to an approximate separation oracle O_α(·, ·, ·, ·) with α ∈ (0, 1], there exists an algorithm that, given any ρ > 0 and an instance of the Bayesian principal-multi-agent problem as input, returns a DSIC menu of randomized contracts with principal's expected utility at least α R_Γ − P_Γ − ρ for every menu of randomized contracts Γ = (γ_θ)_{θ∈Θ^n}, where R_Γ ∈ [0, 1], respectively P_Γ ∈ R_+, denotes the expected reward, respectively the expected overall payment, of Γ. Moreover, such an algorithm runs in time polynomial in the instance size and 1/ρ.
We conclude the section by showing that the approximate separation oracle O_α(·, ·, ·, ·) can be implemented in polynomial time for two classes of Bayesian principal-multi-agent problems. This is the last step needed to fully specify the approximation algorithm introduced in Theorem 8.
In Bayesian principal-multi-agent problem instances that satisfy the FOSD condition and have IR-supermodular succinct rewards, we are able to design a polynomial-time approximate separation oracle O_α(·, ·, ·, ·) with α = 1. Instead, in instances with DR-submodular succinct rewards, we show how to implement an oracle O_α(·, ·, ·, ·) with α = 1 − 1/e. These implementations work by solving suitably-defined problems that resemble non-Bayesian principal-multi-agent instances. In particular, such problems share the structure of non-Bayesian instances, with the rewards scaled by a factor λ_θ and the values P̃_{i,a} replaced by the weights w_{i,a}. Formally, we get the following two results:
Corollary 3. In Bayesian principal-multi-agent problem instances that (i) have succinct rewards specified by an IR-supermodular function and (ii) satisfy the FOSD condition, for any ρ > 0, the problem of computing an optimal menu of randomized contracts admits an algorithm returning a menu with principal's expected utility at least OPT − ρ in time polynomial in the instance size and 1/ρ, where OPT is the value of the optimal principal's expected utility.
Corollary 4. In Bayesian principal-multi-agent problem instances with succinct rewards specified by a DR-submodular function, the problem of computing an optimal menu of randomized contracts admits a polynomial-time approximation algorithm which, for any ε > 0 given as input, outputs a menu providing the principal with an expected utility of at least (1 − 1/e) R_Γ − P_Γ − ε for each menu of randomized contracts Γ = (γ_θ)_{θ∈Θ^n} with high probability, where R_Γ ∈ [0, 1], respectively P_Γ ∈ R_+, denotes the expected reward, respectively the expected overall payment, of Γ.
Footnote 18: Notice that, in Bayesian principal-multi-agent problem instances that satisfy the FOSD condition and have IR-supermodular succinct rewards, it is easy to adapt our results so as to show that there exists an exact separation oracle (thus getting rid of the additive error ε > 0).
We decided to use an approximate separation oracle anyway, for ease of exposition. This choice does not harm the final approximation guarantees of the algorithm (see Corollary 3), since we cannot get rid of the additive approximation ρ > 0 anyway, given that Problem (1) may not admit a maximum.
Notice that Corollary 4 provides the same approximation guarantees as its corresponding result for non-Bayesian instances (see Theorem 6), while Corollary 3 matches those of its corresponding non-Bayesian result up to an additive error ρ > 0 (see Theorem 4).
References
Tal Alon, Paul Dütting, and Inbal Talgam-Cohen. 2021. Contracts with Private Cost per Unit-of-Effort. In Proceedings of the 22nd ACM Conference on Economics and Computation. 52–69.
Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy. 1998. Proof verification and the hardness of approximation problems. Journal of the ACM (JACM) 45, 3 (1998), 501–555.
Moshe Babaioff, Michal Feldman, and Noam Nisan. 2006. Combinatorial agency. In Proceedings of the 7th ACM Conference on Electronic Commerce. 18–28.
Moshe Babaioff, Michal Feldman, and Noam Nisan. 2009. Free-riding and free-labor in combinatorial agency. In International Symposium on Algorithmic Game Theory. Springer, 109–121.
Moshe Babaioff, Michal Feldman, and Noam Nisan. 2010. Mixed strategies in combinatorial agency. Journal of Artificial Intelligence Research 38 (2010), 339–369.
Moshe Babaioff, Michal Feldman, Noam Nisan, and Eyal Winter. 2012. Combinatorial agency. Journal of Economic Theory 147, 3 (2012), 999–1034.
Moshe Babaioff and Eyal Winter. 2014. Contract complexity. EC 14 (2014), 911.
Francis Bach. 2019. Submodular functions: from discrete to continuous domains. Mathematical Programming 175, 1 (2019), 419–459.
Hamsa Bastani, Mohsen Bayati, Mark Braverman, Ramki Gummadi, and Ramesh Johari. 2016. Analysis of medicare pay-for-performance contracts. Available at SSRN 2839143 (2016).
Dimitris Bertsimas and John N Tsitsiklis. 1997. Introduction to linear optimization. Vol. 6. Athena Scientific, Belmont, MA.
Andrew An Bian, Baharan Mirzasoleiman, Joachim Buhmann, and Andreas Krause. 2017. Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (Proceedings of Machine Learning Research), Aarti Singh and Jerry Zhu (Eds.), Vol. 54. PMLR, 111–120. https://proceedings.mlr.press/v54/bian17a.html
Garrett Birkhoff. 1937. Rings of sets. Duke Mathematical Journal 3, 3 (1937), 443–454.
Gabriel Carroll. 2015. Robustness and linear contracts. American Economic Review 105, 2 (2015), 536–63.
Matteo Castiglioni, Alberto Marchesi, and Nicola Gatti. 2022a. Bayesian agency: Linear versus tractable contracts. Artificial Intelligence 307 (2022).
Matteo Castiglioni, Alberto Marchesi, and Nicola Gatti. 2022b. Designing Menus of Contracts Efficiently: The Power of Randomization. https://doi.org/10.48550/ARXIV.2202.10966
Matteo Castiglioni, Alberto Marchesi, and Nicola Gatti. 2022c. Designing Menus of Contracts Efficiently: The Power of Randomization. In EC '22: The 23rd ACM Conference on Economics and Computation. 705–735.
Lin William Cong and Zhiguo He. 2019. Blockchain disruption and smart contracts. The Review of Financial Studies 32, 5 (2019), 1754–1797.
Paul Duetting, Tomer Ezra, Michal Feldman, and Thomas Kesselheim. 2022.
Multi-Agent Contracts. arXiv preprint arXiv:2211.05434 (2022).
Paul Dütting, Tomer Ezra, Michal Feldman, and Thomas Kesselheim. 2022. Combinatorial contracts. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 815–826.
Paul Dütting, Tim Roughgarden, and Inbal Talgam-Cohen. 2019. Simple versus optimal contracts. In Proceedings of the 2019 ACM Conference on Economics and Computation. 369–387.
Paul Dütting, Tim Roughgarden, and Inbal Talgam-Cohen. 2021. The complexity of contracts. SIAM J. Comput. 50, 1 (2021), 211–254.
Yuval Emek and Michal Feldman. 2012. Computing optimal contracts in combinatorial agencies. Theoretical Computer Science 452 (2012), 56–74.
Jiarui Gan, Minbiao Han, Jibang Wu, and Haifeng Xu. 2022. Optimal Coordination in Generalized Principal-Agent Problems: A Revisit and Extensions. arXiv preprint arXiv:2209.01146 (2022).
Martin Grötschel, László Lovász, and Alexander Schrijver. 2012. Geometric algorithms and combinatorial optimization. Vol. 2. Springer Science & Business Media.
Guru Guruganesh, Jon Schneider, and Joshua R Wang. 2021. Contracts under moral hazard and adverse selection. In EC '21: The 22nd ACM Conference on Economics and Computation. 563–582.
Johan Håstad. 1999. Clique is hard to approximate within n^{1−ε}. Acta Mathematica 182, 1 (1999), 105–142.
Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. 2016. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. Journal of Artificial Intelligence Research 55 (2016), 317–359.
Ran Raz. 1998. A parallel repetition theorem. SIAM J. Comput. 27, 3 (1998), 763–803.
Alexander Schrijver. 2000. A combinatorial algorithm minimizing submodular functions in strongly polynomial time. Journal of Combinatorial Theory, Series B 80, 2 (2000), 346–355.
Alexander Schrijver et al. 2003. Combinatorial optimization: polyhedra and efficiency. Vol. 24. Springer.
Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press.
Maxim Sviridenko, Jan Vondrák, and Justin Ward. 2017. Optimal approximation for submodular and supermodular optimization with bounded curvature. Mathematics of Operations Research 42, 4 (2017), 1197–1218.
Steve Tadelis and Ilya Segal. 2005. Lectures in contract theory. Lecture notes for UC Berkeley and Stanford University (2005).
David Zuckerman. 2007. Linear Degree Extractors and the Inapproximability of Max Clique and Chromatic Number. Theory of Computing 3, 6 (2007), 103–128.
Lars Peter Østerdal. 2010. The mass transfer approach to multivariate discrete first order stochastic dominance: Direct proof and implications. Journal of Mathematical Economics 46, 6 (2010), 1222–1228. https://doi.org/10.1016/j.jmateco.2010.08.018
A Proofs Omitted from Section 3
Lemma 1. There always exists a base S* ∈ B(M^I) of M^I such that f^I(S*) = max_{S∈I^I} f^I(S).
Proof. Let S ∈ I^I be any independent set of M^I. Clearly, by adding to S all the ground elements (i, a_∅) ∈ G^I_i for i ∈ N \ N_S, we obtain a base S′ ∈ I^I. By definition of the null action a_∅, it holds that P̃_{i,a_{S,i}} = P̃_{i,a_∅} = 0 for every i ∈ N \ N_S, which implies f^I(S) = f^I(S′), since R_{a_S} = R_{a_{S′}} given that a_S = a_{S′}. This concludes the proof.
Theorem 1.
Given an instance I := (N, Ω, A) of the principal-multi-agent problem, the problem of computing a contract maximizing the principal's expected utility can be reduced in polynomial time to solving max_{S∈I^I} f^I(S) over the 1-partition matroid M^I = ((G^I_i)_{i∈N}, I^I), where f^I : 2^{G^I} → R is a set function such that, for every independent set S ∈ I^I, it holds:
f^I(S) := R_{a_S} − Σ_{i∈N} P̃_{i,a_{S,i}},   where   P̃_{i,a_{S,i}} = min_{p∈P_{i,a_{S,i}}} Σ_{ω∈Ω} F_{i,a_{S,i},ω} p_{i,ω}.
Proof. We prove the result by showing that, given any pair (p, a), where p ∈ R^{n×m}_+ is a contract and a = (a_i)_{i∈N} ∈ ×_{i∈N} A*_i(p) is a tuple of IC actions recommended to the agents, there exists a base S ∈ I^I of M^I such that f^I(S) is greater than or equal to the principal's expected utility under (p, a), and, conversely, given any base S ∈ I^I there exists a pair (p, a) with principal's expected utility f^I(S). This, together with Lemma 1, proves the result.
From (p, a) to a base. Let the base S ∈ I^I be defined so that S := {(i, a_i) : i ∈ N}. Then, given that a_i ∈ A*_i(p) for all i ∈ N and by the definition of P̃_{i,a_{S,i}}, it holds:
f^I(S) = R_{a_S} − Σ_{i∈N} P̃_{i,a_{S,i}} ≥ R_a − Σ_{i∈N} Σ_{ω∈Ω} F_{i,a_i,ω} p_{i,ω} = R_a − Σ_{i∈N} P_{i,a_i},
where the inequality holds since a_S = a and, for every i ∈ N, the value P̃_{i,a_{S,i}} is defined as a minimum taken over the set P_{i,a_{S,i}}, which contains the contract p given that a_i ∈ A*_i(p).
From a base to (p, a). Given a base S ∈ I^I of the matroid M^I, let (p, a) be such that a = (a_i)_{i∈N} satisfies (i, a_i) ∈ S for all i ∈ N and p ∈ arg min_{p′∈P_{i,a_{S,i}}} Σ_{ω∈Ω} F_{i,a_{S,i},ω} p′_{i,ω} for every i ∈ N (notice that such a contract can be built by defining the components p_{i,ω} for ω ∈ Ω independently for each i ∈ N). Then, it immediately follows from the definition of the function f^I that the principal's expected utility under (p, a) is equal to f^I(S).
Theorem 2. The problem of maximizing an ordered-supermodular function over a 1-partition matroid can be reduced in polynomial time to maximizing a supermodular function over a ring of sets.
Proof. Given a 1-partition matroid M := ((G_i)_{i∈[d]}, I) and a function f : 2^G → R that is ordered-supermodular, we show that maximizing f over M is equivalent to maximizing a suitably-defined supermodular function f̃ : R → R over a particular ring of sets R. The latter is defined by the family of all the sets S ⊆ G such that, if x ∈ S and x = π_i(j) for some i ∈ [d] and j ∈ [k_i], then π_i(l) ∈ S for all l ∈ [k_i] : l < j. Moreover, for every S ⊆ G, we let f̃(S) := f(∧S), where ∧S denotes the set obtained by taking an element x ∈ S with maximal value of π_i^{-1}(x) for each partition i ∈ [d]. Then, it is sufficient to show that f̃ is supermodular over R. Indeed, given two sets S, S′ ∈ R, it holds:
f̃(S) + f̃(S′) = f(∧S) + f(∧S′) ≤ f(∧(S ∪ S′)) + f(∧(S ∩ S′)) = f̃(S ∪ S′) + f̃(S ∩ S′),
where the inequality follows from the ordered-supermodularity of f, since ∧(S ∪ S′) and ∧(S ∩ S′) coincide with the partition-wise "maximum" and "minimum" of ∧S and ∧S′, respectively. This concludes the proof.
Corollary 1. The problem of maximizing an ordered-supermodular function over a 1-partition matroid admits a polynomial-time algorithm.
Proof. The problem can be reduced in polynomial time to the maximization of a supermodular function defined over a ring of sets by Theorem 2. Such a problem is known to be solvable in polynomial time; see, e.g., (Schrijver, 2000; Bach, 2019).
B Proof of Theorem 3
In order to prove the theorem, we employ a reduction from a promise problem associated with LABEL-COVER instances, whose definition follows.
Definition 10 (LABEL-COVER instance).
An instance of LABEL-COVER is a tuple (G, Σ, Π):
• G := (U, V, E) is a bipartite graph defined by two disjoint sets of nodes U and V, connected by the edges in E ⊆ U × V, which are such that all the nodes in U have the same degree;
• Σ is a finite set of labels; and
• Π := {Π_e : Σ → Σ | e ∈ E} is a finite set of edge constraints.
Moreover, a labeling of the graph G is a mapping π : U ∪ V → Σ that assigns a label to each vertex of G so that all the edge constraints are satisfied. Formally, a labeling π satisfies the constraint for an edge e = (u, v) ∈ E if it holds that π(v) = Π_e(π(u)).
The classical LABEL-COVER problem is the search problem of finding a valid labeling for a LABEL-COVER instance given as input. In the following, we consider a different version of the problem, which is the promise problem associated with LABEL-COVER instances.
Definition 11 (GAP-LABEL-COVER_{c,s}). For any pair of numbers 0 ≤ s ≤ c ≤ 1, we define GAP-LABEL-COVER_{c,s} as the following promise problem.
• Input: An instance (G, Σ, Π) of LABEL-COVER such that either one of the following is true:
– there exists a labeling π that satisfies at least a fraction c of the edge constraints in Π;
– any labeling π satisfies less than a fraction s of the edge constraints in Π.
• Output: Determine which of the above two cases holds.
To prove Theorem 3, we use the following result due to Raz (1998) and Arora et al. (1998).
Theorem 9 (Raz (1998); Arora et al. (1998)). For any ε > 0, there exists a constant k_ε ∈ N that depends on ε such that the promise problem GAP-LABEL-COVER_{1,ε} restricted to inputs (G, Σ, Π) with |Σ| = k_ε is NP-hard.
Now, we are ready to prove Theorem 3.
Proof of Theorem 3. Given an approximation factor ρ > 0, we reduce from the problem GAP-LABEL-COVER_{1,ρ}. Our construction is such that, if the LABEL-COVER instance admits a labeling that satisfies all the edge constraints, then the corresponding principal-multi-agent problem admits a contract providing the principal with an overall expected utility of at least 1. Otherwise, if at most a fraction ρ of the constraints can be satisfied, then any contract provides the principal with an overall expected utility of at most ρ. Since ρ > 0 can be an arbitrarily small constant, this is sufficient to prove the statement.
Construction. Given an instance (G, Σ, Π) of GAP-LABEL-COVER_{1,ρ} with a bipartite graph G = (U, V, E), we build a principal-multi-agent instance as follows. The set of agents includes an agent n_v for every node v ∈ U ∪ V of G. The outcome space has k_ρ dimensions, i.e., Ω = R^{k_ρ}_+. Each agent n_v, with v ∈ U ∪ V, has an action a_σ for each label σ ∈ Σ. Given a label σ, let ω^σ ∈ R^{k_ρ}_+ be the outcome with ω^σ_σ = 1 and ω^σ_{σ′} = 0 for each σ′ ≠ σ, where for ease of exposition we rename the set Σ as {1, ..., k_ρ}. For each agent n_v and each action a_σ, with σ ∈ Σ, the cost is c_{n_v,a_σ} = 0 and a_σ deterministically induces the outcome ω^σ, i.e., F_{n_v,a_σ,ω^σ} = 1. Finally, the principal's reward function g is defined as follows. For each vector ω ∈ Ω^n,
g(ω) = Σ_{e=(u,v)∈E} Σ_{σ∈Σ} 1{ω_{n_u,σ} = 1 ∧ ω_{n_v,Π_e(σ)} = 1} / |E|.
It is easy to see that the function is IR-supermodular in [0, 1]^{n|Σ|}, and hence for all the inducible outcomes ω.
Footnote 19: It is easy to construct an arbitrarily good approximation of g(·) that is IR-supermodular on the whole domain R^{n|Σ|}_+. For instance, we can set g(ω) = Σ_{e=(u,v)∈E} Σ_{σ∈Σ} e^{M(ω_{n_u,σ} + ω_{n_v,Π_e(σ)} − 2)} / |E| for an arbitrarily large M.
Completeness.
Suppose that the instance (G, Σ, Π) of GAP-LABEL-COVER_{1,ρ} admits a labeling π : U ∪ V → Σ that satisfies all the edge constraints in Π. Let us define a contract that recommends action a_{π(v)} to every agent n_v with v ∈ U ∪ V, while all the payments are set to 0, i.e., p_{n_v,ω} = 0 for each v ∈ U ∪ V and ω ∈ Ω. Notice that the agents follow the recommendations, since they are indifferent among all the actions (all action costs are 0). It is easy to see that the principal's utility is 1, since, for each edge e = (u, v), it holds ω_{n_u,π(u)} = 1 and ω_{n_v,Π_e(π(u))} = 1, given that π(v) = Π_e(π(u)). This concludes the first part of the proof.
Soundness. We show that, if the LABEL-COVER instance is such that every labeling π : U ∪ V → Σ satisfies at most a fraction ρ of the edge constraints in Π, then, in the corresponding principal-multi-agent setting, any contract provides the principal with an expected utility of at most ρ.
Let â be the tuple of action recommendations, and recall that each action â_{n_v}, with v ∈ U ∪ V, deterministically induces an outcome ω_{n_v} ∈ {ω^σ}_{σ∈Σ}. As a first step, notice that, for each edge e = (u, v), the term Σ_{σ∈Σ} 1{ω_{n_u,σ} = 1 ∧ ω_{n_v,Π_e(σ)} = 1} / |E| is at most 1/|E|, since there is exactly one σ such that ω_{n_u,σ} = 1. Suppose by contradiction that there exists a contract with utility strictly larger than ρ. Then, there are strictly more than ρ|E| edges such that Σ_{σ∈Σ} 1{ω_{n_u,σ} = 1 ∧ ω_{n_v,Π_e(σ)} = 1} = 1. Consider the labeling that assigns to each vertex v ∈ U ∪ V the label σ such that â_{n_v} = a_σ. It is easy to see that this labeling satisfies strictly more than a fraction ρ of the edge constraints, reaching a contradiction.
C Proofs Omitted from Section 4
To prove the results in this section, it is useful to employ the definitions of submodularity and supermodularity for continuous functions. Indeed, the properties introduced in Definition 2 are special cases of the classical submodularity and supermodularity properties which are usually considered in the literature. Formally, by letting max{ω, ω′}, respectively min{ω, ω′}, be the component-wise maximum, respectively minimum, between two given vectors ω, ω′ ∈ R^{nq}_+, the following definition holds:
Definition 12. A reward function g : R^{nq}_+ → R is submodular if the following holds:
g(ω) + g(ω′) ≥ g(max{ω, ω′}) + g(min{ω, ω′})   ∀ω, ω′ ∈ R^{nq}_+.
Moreover, a reward function g : R^{nq}_+ → R is supermodular if its opposite function −g is submodular.
It is well known that any DR-submodular, respectively IR-supermodular, function is also submodular, respectively supermodular, but the converse is not true (Bian et al., 2017).
Lemma 2. In principal-multi-agent problems with succinct rewards that satisfy the FOSD condition, for every agent i ∈ N and pair a_j, a_k ∈ A of agent i's actions such that j < k, there exists a collection of probability distributions µ^ω ∈ Δ_{Ω^−}, one per outcome ω ∈ Ω, which are supported on the finite subset of the positive orthant Ω^− := R^m_+ ∩ {ω − ω′ | ω, ω′ ∈ Ω} and satisfy the following equations:
F_{i,a_k,ω} = Σ_{ω′∈Ω} F_{i,a_j,ω′} µ^{ω′}_{ω−ω′}   ∀ω ∈ Ω.
Proof. The proof follows from Theorem 1 in (Østerdal, 2010). In particular, for every agent i ∈ N and action index j ∈ [ℓ − 1], given that the FOSD condition ensures that Σ_{ω∈Ω′} F_{i,a_{j+1},ω} ≤ Σ_{ω∈Ω′} F_{i,a_j,ω} for all comprehensive sets Ω′ ⊆ Ω, Theorem 1 in (Østerdal, 2010) states that F_{i,a_j} can be derived from F_{i,a_{j+1}} by means of a finite sequence of deteriorating bilateral transfers (of mass).
These are operations which consist of moving probability mass from an outcome ω ∈ Ω to another outcome ω′ ∈ Ω such that ω′ ≤ ω, while leaving the probability mass on outcomes different from ω and ω′ untouched. As a result, given an agent i ∈ N and a pair a_j, a_k ∈ A of agent i's actions such that j < k, it is easy to check that the probability F_{i,a_k,ω} which F_{i,a_k} places on an outcome ω ∈ Ω can be expressed as a suitable combination of the probabilities F_{i,a_j,ω′} which F_{i,a_j} places on outcomes ω′ ∈ Ω such that ω′ ≤ ω (since all the bilateral transfers involved in the process of turning F_{i,a_k} into F_{i,a_j} are deteriorating). This concludes the proof.
Lemma 3. Given an instance I := (N, Ω, A) of the principal-multi-agent problem that (i) has succinct rewards specified by an IR-supermodular function and (ii) satisfies the FOSD condition, the set function f^I defined over the 1-partition matroid M^I = ((G^I_i)_{i∈N}, I^I) is ordered-supermodular.
Proof. In order to show the result, we prove that, for every pair of independent sets S, S′ ∈ I^I:
f^I(S ∧ S′) + f^I(S ∨ S′) ≥ f^I(S) + f^I(S′),
where the partition-wise "maximum" ∧ and "minimum" ∨ are defined with respect to the bijective functions π_i : [k_i] → G^I_i (with k_i = ℓ) constructed according to the (agent-dependent) ordering of the action set A. In particular, for every i ∈ N and j ∈ [ℓ], it holds π_i(j) = (i, a_j).
For ease of presentation, in the rest of the proof we let a_1 := a_{S∧S′} and a_2 := a_{S∨S′}, so that a_{1,i}, respectively a_{2,i}, denotes the i-th component of a_1, respectively a_2.
First, let us notice that Σ_{i∈N} P̃_{i,a_{1,i}} + Σ_{i∈N} P̃_{i,a_{2,i}} = Σ_{i∈N} P̃_{i,a_{S,i}} + Σ_{i∈N} P̃_{i,a_{S′,i}}, which holds since, by definition of the partition-wise "maximum" ∧ and "minimum" ∨, for every agent i ∈ N the pair of actions a_{1,i}, a_{2,i} exactly coincides (up to ordering) with the pair a_{S,i}, a_{S′,i}. Thus, given the definition of f^I (see Theorem 1), in order to prove the result it is sufficient to show that R_{a_1} + R_{a_2} ≥ R_{a_S} + R_{a_{S′}}.
By definition of a_1 and a_2, we have that π_i^{-1}((i, a_{1,i})) ≥ π_i^{-1}((i, a_{2,i})) for every i ∈ N. Then, thanks to Lemma 2 and how actions are ordered, for every agent i ∈ N, there exists a collection of probability distributions µ^{i,ω} ∈ Δ_{Ω^−}, one per outcome ω ∈ Ω, such that F_{i,a_{1,i},ω} = Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω−ω′}, where we recall that the µ^{i,ω} are the probability distributions that allow us to turn F_{i,a_{2,i}} into F_{i,a_{1,i}}. Moreover, notice that, whenever a_{1,i} = a_{2,i}, it holds µ^{i,ω}_0 = 1 for every ω ∈ Ω (where 0 denotes the all-zeros vector). Then, we can write:
R_{a_1} = Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{1,i},ω_i} ) g(ω) = Σ_{ω∈Ω^n} Π_{i∈N} ( Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) g(ω),
R_{a_S} = Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{S,i},ω_i} ) g(ω) = Σ_{ω∈Ω^n} ( Π_{i∈N: a_{S,i}=a_{1,i}} Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) ( Π_{i∈N: a_{S,i}≠a_{1,i}} F_{i,a_{2,i},ω_i} ) g(ω),
and
R_{a_{S′}} = Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{S′,i},ω_i} ) g(ω) = Σ_{ω∈Ω^n} ( Π_{i∈N: a_{S′,i}=a_{1,i}} Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) ( Π_{i∈N: a_{S′,i}≠a_{1,i}} F_{i,a_{2,i},ω_i} ) g(ω).
In the following, for ease of presentation, given a pair of tuples of agents' outcomes ω, ω′ ∈ Ω^n such that ω_i ≥ ω′_i for every i ∈ N, we denote by ω^{ω,ω′}_1, ω^{ω,ω′}_2 ∈ Ω^n another pair of tuples of agents' outcomes, which depend on ω, ω′ and are defined as follows:
• if agent i ∈ N is such that a_{1,i} = a_{S,i} and a_{2,i} = a_{S′,i}, then it holds ω^{ω,ω′}_{1,i} = ω_i and ω^{ω,ω′}_{2,i} = ω′_i;
• if agent i ∈ N is such that a_{1,i} = a_{S′,i} and a_{2,i} = a_{S,i}, then it holds ω^{ω,ω′}_{1,i} = ω′_i and ω^{ω,ω′}_{2,i} = ω_i.
Notice that, as it is easy to check, it holds max{ω^{ω,ω′}_1, ω^{ω,ω′}_2} = ω and min{ω^{ω,ω′}_1, ω^{ω,ω′}_2} = ω′ (component-wise). Thus, it is also the case that g(ω) + g(ω′) ≥ g(ω^{ω,ω′}_1) + g(ω^{ω,ω′}_2), since the reward function g is IR-supermodular and hence supermodular (see Definition 12). Let N_1 := {i ∈ N : a_{S,i} = a_{1,i}} and N_2 := {i ∈ N : a_{S′,i} = a_{1,i}}. In order to conclude the proof, we show that the following holds:
R_{a_1} + R_{a_2}
= Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{1,i},ω_i} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω_i} ) g(ω)   (5a)
= Σ_{ω∈Ω^n} Π_{i∈N} ( Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω_i} ) g(ω)   (5b)
= Σ_{ω∈Ω^n} Σ_{ω′∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω′_i} ) ( Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω_i} ) g(ω)   (5c)
= Σ_{ω′∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω′_i} ) [ Σ_{ω∈Ω^n} ( Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω) + g(ω′) ]   (5d)
= Σ_{ω′∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω′_i} ) [ Σ_{ω∈Ω^n} ( Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω′) ]   (5e)
≥ Σ_{ω′∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω′_i} ) Σ_{ω∈Ω^n} ( Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} ) [ g(ω^{ω,ω′}_1) + g(ω^{ω,ω′}_2) ]   (5f)
= Σ_{ω′∈Ω^n} ( Π_{i∈N} F_{i,a_{2,i},ω′_i} ) [ Σ_{ω∈Ω^n: ω_i=ω′_i ∀i∈N_2} ( Π_{i∈N_1} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω^{ω,ω′}_1) + Σ_{ω∈Ω^n: ω_i=ω′_i ∀i∈N_1} ( Π_{i∈N_2} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω^{ω,ω′}_2) ]   (5g)
= Σ_{ω∈Ω^n} ( Π_{i∉N_1} F_{i,a_{2,i},ω_i} ) Σ_{ω′∈Ω^n: ω′_i=ω_i ∀i∈N_2} ( Π_{i∈N_1} F_{i,a_{2,i},ω′_i} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω^{ω,ω′}_1) + Σ_{ω∈Ω^n} ( Π_{i∉N_2} F_{i,a_{2,i},ω_i} ) Σ_{ω′∈Ω^n: ω′_i=ω_i ∀i∈N_1} ( Π_{i∈N_2} F_{i,a_{2,i},ω′_i} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω^{ω,ω′}_2)   (5h)
= Σ_{ω∈Ω^n} ( Π_{i∉N_1} F_{i,a_{2,i},ω_i} ) Σ_{ω′∈Ω^n: ω′_i=ω_i ∀i∈N_2} ( Π_{i∈N_1} F_{i,a_{2,i},ω′_i} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∉N_2} F_{i,a_{2,i},ω_i} ) Σ_{ω′∈Ω^n: ω′_i=ω_i ∀i∈N_1} ( Π_{i∈N_2} F_{i,a_{2,i},ω′_i} µ^{i,ω′_i}_{ω_i−ω′_i} ) g(ω)   (5i)
= Σ_{ω∈Ω^n} ( Π_{i∉N_1} F_{i,a_{2,i},ω_i} ) ( Π_{i∈N_1} Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) g(ω) + Σ_{ω∈Ω^n} ( Π_{i∉N_2} F_{i,a_{2,i},ω_i} ) ( Π_{i∈N_2} Σ_{ω′∈Ω} F_{i,a_{2,i},ω′} µ^{i,ω′}_{ω_i−ω′} ) g(ω)   (5j)
= R_{a_S} + R_{a_{S′}},   (5k)
where Equation (5b) follows from the definition of R_{a_1} and Lemma 2, Equation (5e) comes from the fact that Σ_{ω∈Ω^n} Π_{i∈N} µ^{i,ω′_i}_{ω_i−ω′_i} = 1, Equation (5f) holds since g(ω) + g(ω′) ≥ g(ω^{ω,ω′}_1) + g(ω^{ω,ω′}_2) by supermodularity, Equation (5g) follows from ω^{ω,ω′}_1 = ω^{ω,ω″}_1 whenever ω′_i = ω″_i for all i ∈ N_1, from ω^{ω,ω′}_2 = ω^{ω,ω″}_2 whenever ω′_i = ω″_i for all i ∈ N_2, and from Σ_{ω∈Ω} µ^{i,ω′}_{ω−ω′} = 1, Equation (5i) comes from the fact that, under the given restrictions on the summation variables, it holds ω^{ω,ω′}_1 = ω and ω^{ω,ω′}_2 = ω, while Equation (5k) follows from the definitions of R_{a_S} and R_{a_{S′}}.
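The continuous supermodularity inequality invoked in step (5f) can be sanity-checked numerically. The toy Python script below is purely illustrative: the reward g(ω) = (Σ_i ω_i)² is a sample supermodular function chosen for the check, not one taken from the paper, and it tests the inequality of Definition 12 on random non-negative vectors.

```python
# Toy numeric check of the supermodularity inequality of Definition 12.
import random

def g(w):
    # (sum of components)^2 is supermodular on the non-negative orthant.
    return sum(w) ** 2

for _ in range(1000):
    w1 = [random.uniform(0.0, 2.0) for _ in range(3)]
    w2 = [random.uniform(0.0, 2.0) for _ in range(3)]
    hi = [max(a, b) for a, b in zip(w1, w2)]  # component-wise maximum
    lo = [min(a, b) for a, b in zip(w1, w2)]  # component-wise minimum
    assert g(hi) + g(lo) >= g(w1) + g(w2) - 1e-9
```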
Theorem 4. For principal-multi-agent problem instances that (i) have succinct rewards specified by an IR-supermodular function and (ii) satisfy the FOSD condition, the problem of computing an optimal contract admits a polynomial-time algorithm.
Proof. By Theorem 1, computing a utility-maximizing contract in a principal-multi-agent problem instance I := (N, Ω, A) can be reduced in polynomial time to the problem of maximizing a suitably-defined set function f^I over a particular 1-partition matroid M^I. Moreover, by Lemma 3, the function f^I is ordered-supermodular whenever the principal's rewards are specified by an IR-supermodular function and the FOSD condition is satisfied. Hence, Corollary 1 immediately provides a polynomial-time algorithm for finding a utility-maximizing contract.
D Proof of Theorem 5
To prove the theorem, we employ a reduction from a promise problem related to the problem of finding large independent sets in graphs, whose definition follows.
Definition 13 (GAP-IS_α). For every α ∈ [0, 1], we define GAP-IS_α as the following promise problem:
• Input: An undirected graph G = (V, E) such that either one of the following is true:
– there exists an independent set (i.e., a subset of vertices such that there is no edge connecting two of them) of size at least |V|^{1−α};
– all the independent sets have size at most |V|^α.
• Output: Determine which of the above two cases holds.
GAP-IS_α is known to be NP-hard for any α > 0 (Håstad, 1999; Zuckerman, 2007).
Now, we are ready to prove Theorem 5.
Proof of Theorem 5. Given a constant α > 0, we reduce from the problem GAP-IS_α.
Our construction is such that, if the GAP-IS_α instance G = (V, E) admits an independent set of size at least |V|^{1−α}, then the corresponding contract design problem admits a solution providing the principal with an overall expected utility of at least δ|V|^{1−α}, where δ will be defined in the following. Otherwise, if all the independent sets have size at most |V|^α, then any contract provides the principal with an overall expected utility of at most δ|V|^α. Moreover, in the principal-multi-agent problem built by our reduction it holds n = |V|. Since GAP-IS_α is NP-hard for every constant α > 0, this is sufficient to prove the statement.
Construction. Given an instance G = (V, E) of GAP-IS_α, we build an instance of the principal-multi-agent problem as follows. For each vertex v ∈ V, there is an agent n_v with actions a_1 and a_0. The outcome space is given by R_+. For each agent n_v, with v ∈ V, action a_1 deterministically induces outcome ω_1 = 1, i.e., F_{n_v,a_1,ω_1} = 1, and has cost 1 − δ, i.e., c_{n_v,a_1} = 1 − δ, where δ = 1/|V|^2. Moreover, action a_0 deterministically induces outcome ω_0 = 0, i.e., F_{n_v,a_0,ω_0} = 1, and has cost 0, i.e., c_{n_v,a_0} = 0. Given a node v ∈ V, let k_v ≤ |V| be the degree of node v. Finally, the reward function g is defined as
g(ω) = Σ_{(u,v)∈E} max{ ω_u/k_u , ω_v/k_v }.
Notice that this function is DR-submodular, since it is a sum of DR-submodular functions. Indeed, given an edge e = (u, v), the term max{ω_u/k_u, ω_v/k_v} is the maximum of two linear functions of disjoint variables and hence is DR-submodular.
Completeness. Suppose that there exists an independent set V* ⊆ V of G with size at least |V|^{1−α}. We can build a contract (p, a), with p ∈ R^{n×m}_+ and a ∈ A^n, such that, for each v ∈ V*, it holds p_{n_v,ω_1} = 1 − δ, while all the other payments are set to 0.
Finally, we recommend action a_1 to every agent n_v with v ∈ V*, i.e., a_{n_v} = a_1, and action a_0 to every agent n_v with v ∈ V \ V*, i.e., a_{n_v} = a_0. It is easy to see that the action profile a is such that a_i is IC under p for each agent i ∈ N. The total reward is
Σ_{(u,v)∈E} max{ω_u/k_u, ω_v/k_v} = Σ_{v∈V*} Σ_{u∈V: (u,v)∈E} max{ω_u/k_u, ω_v/k_v} = Σ_{v∈V*} Σ_{u∈V: (u,v)∈E} 1/k_v = Σ_{v∈V*} k_v (1/k_v) = |V*|,
where the second equality holds since, for each v ∈ V*, we have ω_v = 1 and ω_u = 0 for each u such that (u, v) ∈ E (as V* is an independent set). Moreover, the total payment is given by Σ_{v∈V*} (1 − δ) = |V*|(1 − δ). Thus, the total principal's utility is |V*| − (1 − δ)|V*| = δ|V*|.
Soundness. We prove that, if all the independent sets of G have size at most |V|^α, then the principal's expected utility is at most δ|V|^α for any contract (p, a) with p ∈ R^{n×m}_+ and a ∈ A^n. First, we show that, if the contract incentivizes two agents n_u and n_v with (u, v) ∈ E, i.e., with u and v adjacent vertices, to play action a_1, then the principal's utility is non-positive. Let V̄ be the set of nodes corresponding to agents incentivized to play a_1, i.e., V̄ := {v ∈ V : a_{n_v} = a_1}, and let Ē be the set of edges connecting two nodes in V̄. Then, if Ē ≠ ∅, the principal's reward is at most
Σ_{(u,v)∈E} max{ω_u/k_u, ω_v/k_v} = Σ_{(u,v)∈E\Ē} max{ω_u/k_u, ω_v/k_v} + Σ_{(u,v)∈Ē} max{ω_u/k_u, ω_v/k_v}
≤ Σ_{v∈V̄: (u,v)∈E\Ē} 1/k_v + Σ_{(u,v)∈Ē} ( ω_u/k_u + ω_v/k_v − 1/|V| )
≤ Σ_{v∈V̄: (u,v)∈E\Ē} 1/k_v + Σ_{(u,v)∈Ē} ( ω_u/k_u + ω_v/k_v ) − 1/|V|
= Σ_{v∈V̄: (u,v)∈E\Ē} 1/k_v + Σ_{v∈V̄: (u,v)∈Ē} 1/k_v − 1/|V|
= Σ_{v∈V̄} Σ_{u∈V: (u,v)∈E} 1/k_v − 1/|V|
≤ |V̄| − 1/|V|,
where the first inequality holds since max{x, y} = x + y − min{x, y} and min{1/k_u, 1/k_v} ≥ 1/|V|, given that all the degrees are at most |V|. At the same time, the payment is at least (1 − δ)|V̄|, since for each agent n_v with v ∈ V̄ it holds p_{n_v,ω_1} ≥ 1 − δ. Hence, the principal's utility is at most |V̄| − 1/|V| − (1 − δ)|V̄| = δ|V̄| − 1/|V| ≤ 0, since δ|V̄| = |V̄|/|V|^2 ≤ 1/|V|.
Hence, in any contract with strictly positive utility, there are no two agents n_v, n_u corresponding to adjacent vertices, i.e., such that (v, u) ∈ E, both playing action a_1. Since all the independent sets have size at most |V|^α, this implies that |V̄| ≤ |V|^α. Then, the reward of the contract is given by Σ_{v∈V̄} Σ_{u: (u,v)∈E} 1/k_v = |V̄|. Moreover, the payment is at least (1 − δ)|V̄|, since for each agent n_v with v ∈ V̄ it holds p_{n_v,ω_1} ≥ 1 − δ. Hence, the principal's utility is at most |V̄| − (1 − δ)|V̄| = δ|V̄| ≤ δ|V|^α.
E Proofs Omitted from Section 5
Lemma 4. Given an instance I := (N, Ω, A) of the principal-multi-agent problem with succinct rewards specified by a DR-submodular function, the extended set function f^I (see Definition 7) can be defined as f^I(S) := f_I(S) + l_I(S) for every S ⊆ G^I, where f_I : 2^{G^I} → R_+ is a monotone-increasing submodular function and l_I : 2^{G^I} → R is a linear function, both defined over the 1-partition matroid M^I.
Proof. By letting f_I : 2^{G^I} → R_+ and l_I : 2^{G^I} → R be defined so that f_I(S) := R_S and l_I(S) := −Σ_{(i,a)∈S} P̃_{i,a} for every S ⊆ G^I, in order to prove the statement it is sufficient to show that f_I is a monotone-increasing submodular function (notice that l_I is linear by definition).
It is easy to check that f_I is monotone-increasing, since the reward function g is increasing by assumption (see Assumption 2). Thus, we are left to show that f_I is also submodular.
In the following, for ease of presentation, given an agent i ∈ N and an outcome ω ∈ Ω, we let ω^{i,ω} ∈ Ω^n be the tuple of agents' outcomes such that ω^{i,ω}_i = ω and ω^{i,ω}_j = ω_∅ for all j ∈ N : j ≠ i. Moreover, given two tuples ω, ω′ ∈ Ω^n, we let ω + ω′ be the tuple whose i-th outcome is ω_i + ω′_i.
In order to prove that f_I is submodular, we need to show that, for any two subsets S ⊂ S′ ⊆ G^I and element (i, a) ∈ G^I, it holds f_I(S ∪ {(i, a)}) − f_I(S) ≥ f_I(S′ ∪ {(i, a)}) − f_I(S′). Indeed:
f_I(S′ ∪ {(i, a)}) − f_I(S′) = R_{S′∪{(i,a)}} − R_{S′}
= Σ_{ω∈Ω̃^n} r_ω Π_{j∈N} F_{j,S′∪{(i,a)},ω_j} − Σ_{ω∈Ω̃^n} r_ω Π_{j∈N} F_{j,S′,ω_j}
= Σ_{ω∈Ω̃^n} Σ_{ω′∈Ω̃^n} Σ_{ω̄∈Ω} r_{ω+ω′+ω^{i,ω̄}} F_{i,a,ω̄} ( Π_{j∈N} F_{j,S,ω_j} ) ( Π_{j∈N} F_{j,S′\S,ω′_j} ) − Σ_{ω∈Ω̃^n} Σ_{ω′∈Ω̃^n} r_{ω+ω′} ( Π_{j∈N} F_{j,S,ω_j} ) ( Π_{j∈N} F_{j,S′\S,ω′_j} )
= Σ_{ω∈Ω̃^n} Σ_{ω′∈Ω̃^n} Σ_{ω̄∈Ω} F_{i,a,ω̄} ( Π_{j∈N} F_{j,S,ω_j} ) ( Π_{j∈N} F_{j,S′\S,ω′_j} ) ( r_{ω+ω′+ω^{i,ω̄}} − r_{ω+ω′} )
≤ Σ_{ω∈Ω̃^n} Σ_{ω′∈Ω̃^n} Σ_{ω̄∈Ω} F_{i,a,ω̄} ( Π_{j∈N} F_{j,S,ω_j} ) ( Π_{j∈N} F_{j,S′\S,ω′_j} ) ( r_{ω+ω^{i,ω̄}} − r_ω )
= Σ_{ω∈Ω̃^n} Σ_{ω̄∈Ω} F_{i,a,ω̄} ( Π_{j∈N} F_{j,S,ω_j} ) ( r_{ω+ω^{i,ω̄}} − r_ω )
= Σ_{ω∈Ω̃^n} Σ_{ω̄∈Ω} F_{i,a,ω̄} ( Π_{j∈N} F_{j,S,ω_j} ) r_{ω+ω^{i,ω̄}} − Σ_{ω∈Ω̃^n} ( Π_{j∈N} F_{j,S,ω_j} ) r_ω
= Σ_{ω∈Ω̃^n} r_ω Π_{j∈N} F_{j,S∪{(i,a)},ω_j} − Σ_{ω∈Ω̃^n} r_ω Π_{j∈N} F_{j,S,ω_j}
= f_I(S ∪ {(i, a)}) − f_I(S),
where the inequality holds by DR-submodularity, since the increment of r due to adding ω^{i,ω̄} does not increase when the base tuple grows from ω to ω + ω′. This concludes the proof.
Theorem 6. In principal-multi-agent problems with succinct rewards specified by a DR-submodular function, the problem of computing an optimal contract admits a polynomial-time approximation algorithm that, for any ε > 0 given as input, outputs a contract with principal's expected utility at least (1 − 1/e) R_{(p,a*)} − P_{(p,a*)} − ε for any contract (p, a*) with high probability, where R_{(p,a*)} ∈ [0, 1], respectively P_{(p,a*)} ∈ R_+, denotes the expected reward, respectively payment, under (p, a*).
Proof. The result easily follows by noticing that, thanks to Lemma 4, the problem is a special case of the ones studied in (Sviridenko et al., 2017).
In particular, Theorem 3.1 in (Sviridenko et al., 2017) shows that there exists a polynomial-time algorithm that, given as input an ε > 0, a matroid M := (G, I), a monotone-increasing submodular function f : 2^G → R_+, and a linear function l : 2^G → R, outputs an independent set S ∈ I satisfying f(S) + l(S) ≥ (1 − 1/e) f(S′) + l(S′) − ε v̂ for every S′ ∈ I with high probability, where, for ease of presentation, we let v̂ := max{ max_{x∈G} f({x}), max_{x∈G} |l({x})| }.
It is easy to see that, by definition of the reward function, it holds max_{(i,a)∈G^I} f_I({(i, a)}) ≤ 1. Moreover, it is always possible to build a matroid which is equivalent (for our purposes) to M^I and satisfies max_{(i,a)∈G^I} |l_I({(i, a)})| ≤ 1. To do that, let G̃^I ⊆ G^I be the set of elements (i, a) such that P̃_{i,a} > 1. Then, any independent set including an element of G̃^I cannot be optimal, since it has negative value (recall that the values of f_I are in [0, 1]). Hence, we can optimize over the matroid that only includes the elements in G^I \ G̃^I, so that we get v̂ ≤ 1. This concludes the proof.
F Proofs Omitted from Section 6
Lemma 5.
Given an instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem and a DSIC menu of randomized contracts, there always exists another DSIC menu of randomized contracts Γ = (γ_θ)_{θ∈Θ^n} with at least the same principal's expected utility such that, for every i ∈ N and θ ∈ Θ^n, it holds that |{ p_i | p ∈ supp(γ_θ) ∧ p ∈ P_{i,θ_i,a} }| ≤ 1 for all a ∈ A, where p_i ∈ R^m_+ denotes the i-th row of matrix p (i.e., agent i's payment scheme under contract p).
Proof. Consider a menu of randomized contracts (γ̂_θ)_{θ∈Θ^n} such that there exist a tuple θ̂ ∈ Θ^n, an agent i ∈ N, and an action a ∈ A for which two contracts p, p′ ∈ supp(γ̂_{θ̂}) satisfy p_i ≠ p′_i and {p_i, p′_i} ⊆ P_{i,θ̂_i,a}. Let p̄ be the contract such that p̄_j = p_j for each j ≠ i and p̄_i = γ̂_{θ̂}(p) p_i + γ̂_{θ̂}(p′) p′_i. Moreover, let p̄′ be such that p̄′_j = p′_j for each j ≠ i and p̄′_i = γ̂_{θ̂}(p) p_i + γ̂_{θ̂}(p′) p′_i. We build a DSIC menu of randomized contracts (γ_θ)_{θ∈Θ^n} with at least the same principal's utility such that:
• γ_θ = γ̂_θ for each θ ≠ θ̂;
• γ_{θ̂}(p̄) = γ̂_{θ̂}(p) + γ̂_{θ̂}(p̄);
• γ_{θ̂}(p̄′) = γ̂_{θ̂}(p′) + γ̂_{θ̂}(p̄′);
• γ_{θ̂}(p̂) = γ̂_{θ̂}(p̂) for each p̂ ∉ {p, p′, p̄, p̄′}.
It is easy to see that γ_{θ̂} satisfies |{ p_i | p ∈ supp(γ_{θ̂}) ∧ p ∈ P_{i,θ̂_i,a} }| ≤ |{ p_i | p ∈ supp(γ̂_{θ̂}) ∧ p ∈ P_{i,θ̂_i,a} }| − 1. Moreover, the menu (γ_θ)_{θ∈Θ^n} provides the principal with the same utility as (γ̂_θ)_{θ∈Θ^n}. To conclude the proof, we need to prove that (γ_θ)_{θ∈Θ^n} is DSIC. Following an analysis similar to the one in (Castiglioni et al., 2022a), one can show that replacing the marginal payment schemes p_i and p′_i with the weighted combination p̄_i preserves the DSIC constraints. Applying this operation until no two such contracts exist proves the statement.
Lemma 6. There exists a function τ : N → R such that τ(x) is O(2^{poly(x)}) (with poly(x) being a polynomial in x) and, for every instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem, agent i ∈ N, type θ ∈ Θ, and inducible action a ∈ A_{i,θ}, there exists a contract p ∈ R^{n×m}_+ such that a ∈ A*_{i,θ}(p) and p_{i,ω} ≤ τ(|I|) for all ω ∈ Ω, where |I| is the size of instance I.
Footnote 20: In the rest of the section, we always assume that the size of a problem instance is expressed in terms of number of bits.
Proof. For every instance I := (N, Θ, Ω, A), agent i ∈ N, type θ ∈ Θ, and inducible action a ∈ A_{i,θ}, the set P_{i,θ,a} of contracts under which action a is IC can be defined by means of a system of linear inequalities whose number of variables, number of inequalities, and size of the binary representation of coefficients can all be bounded by polynomials in |I|. Hence, given that P_{i,θ,a} cannot be empty (otherwise a ∈ A_{i,θ} would be contradicted), there must exist a contract p ∈ P_{i,θ,a} such that, for every outcome ω ∈ Ω, the payment p_{i,ω} is upper bounded by a O(2^{poly(|I|)}) term, where poly(|I|) is a polynomial in the size |I| (in terms of number of bits) of instance I (this easily follows from standard LP arguments; see, e.g., (Bertsimas and Tsitsiklis, 1997)). The result is readily proved by choosing a suitable function τ : N → R so that such an upper bound holds for every instance I, agent i ∈ N, type θ ∈ Θ, and inducible action a ∈ A_{i,θ}.
Lemma 7. For every instance of the Bayesian principal-multi-agent problem, it holds that LP ≥ SUP.
Proof. To prove the result, we show that, given any feasible solution to Problem (1), it is possible to recover a feasible solution to LP (2) having the same objective function value.
Let (t_{θ,a}, ξ_{i,θ,a}, p_{i,θ,a,ω}) be a feasible solution to Problem (1). Then, we define a solution to LP (2) by letting y_{i,θ,a,ω} = ξ_{i,θ,a} p_{i,θ,a,ω} for every agent i ∈ N, tuple of agents' types θ ∈ Θ̃^n, inducible action a ∈ A_{i,θ_i}, and outcome ω ∈ Ω. Additionally, all the variables that also appear in Problem (1) keep their values, while the variables γ_{i,θ,θ′,a} are set equal to their corresponding terms in the sums appearing on the right-hand sides of Constraints (1b). It is immediate to see that the solution defined above is indeed feasible for LP (2), after noticing that, for every i ∈ N, θ ∈ Θ̃^n, and action a ∈ A which is not inducible for an agent i of type θ_i, it holds that ξ_{i,θ,a} = 0. Indeed, since Constraints (1b) hold, if ξ_{i,θ,a} > 0 then there exists a contract p ∈ R^{n×m}_+ under which action a is IC for an agent i of type θ_i, and, thus, a ∈ A_{i,θ_i}.
It is easy to check that the feasible solution to LP (2) defined above has exactly the same objective function value as its corresponding feasible solution to Problem (1). Thus, the result is readily proved by observing that the objective functions of Problem (1) and LP (2) are continuous and that, for any ε > 0, there always exists a feasible solution to Problem (1) with value at least SUP − ε.
Lemma 8. Given an instance I := (N, Θ, Ω, A) of the Bayesian principal-multi-agent problem and an irregular solution to LP (2) with value VAL, for any ε > 0, it is possible to recover a regular solution to LP (2) with value at least VAL − ε(n τ(|I|) + 1) in time polynomial in |I| and 1/ε, where τ is a function defined as per Lemma 6 and |I| denotes the size of instance I.
Proof. Let (t_{θ,a}, ξ_{i,θ,a}, y_{i,θ,a,ω}, γ_{i,θ,θ′,a}) be a feasible solution to LP (2). Moreover, let us define W as the set of tuples w = (i, θ, a) such that y_{i,θ,a,ω} > 0 for some ω ∈ Ω and ξ_{i,θ,a} = 0 (i.e., the tuples of indexes identifying the groups of variables that do not meet the regularity condition).
As a first step, we show that, for every tuple w = (i, θ, a) ∈ W, it is possible to build a feasible solution to LP (2), which we refer to as (t^w_{θ,a}, ξ^w_{i,θ,a}, y^w_{i,θ,a,ω}, γ^w_{i,θ,θ′,a}) for clarity of exposition, such that its corresponding DSIC menu of randomized contracts always recommends action a with probability 1 to an agent i that truthfully reports their type to be θ_i. Since a ∈ A_{i,θ_i} thanks to how LP (2) is constructed, Lemma 6 guarantees that there exists a contract p^w ∈ R^{n×m}_+ (depending on the tuple w = (i, θ, a)) such that a ∈ A*_{i,θ_i}(p^w) and p^w_{i,ω} ≤ τ(|I|) for all ω ∈ Ω, where τ : N → R is a suitably-defined function such that τ(x) is O(2^{poly(x)}). Then, let us define y^w_{i,θ,a,ω} = p^w_{i,ω} for all ω ∈ Ω, while ξ^w_{i,θ,a} = 1. Additionally, for every θ′ ∈ Θ̃^n and j ∈ N such that (θ′, j) ≠ (θ, i), by letting a′ ∈ A_{j,θ′_j} be any action that is inducible for an agent j of type θ′_j, we define ξ^w_{j,θ′,a′} = 1. Finally, we let t^w_{θ,a} = 1 for the corresponding tuples of action recommendations. It is easy to check that, by suitably defining all the unspecified variables, the solution (t^w_{θ,a}, ξ^w_{i,θ,a}, y^w_{i,θ,a,ω}, γ^w_{i,θ,θ′,a}) is feasible for LP (2) and has value at least −n τ(|I|).
In conclusion, for any ε > 0, let us consider the solution (t′_{θ,a}, ξ′_{i,θ,a}, y′_{i,θ,a,ω}, γ′_{i,θ,θ′,a}) to LP (2) whose components are defined as follows (applying the operations component-wise):
(1 − ε) (t_{θ,a}, ξ_{i,θ,a}, y_{i,θ,a,ω}, γ_{i,θ,θ′,a}) + Σ_{w∈W} (ε/|W|) (t^w_{θ,a}, ξ^w_{i,θ,a}, y^w_{i,θ,a,ω}, γ^w_{i,θ,θ′,a}).
Theorem 7. Given an instance $I := (N, \Theta, \Omega, A)$ of the Bayesian principal-multi-agent problem and an optimal solution to LP (2), for any $\epsilon > 0$, it is possible to recover a feasible solution to Problem (1) with value at least $\mathrm{SUP} - \epsilon(n \, \tau(|I|) + 1)$ in time polynomial in $|I|$ and $\frac{1}{\epsilon}$, where $\tau$ is the function defined in Lemma 6 and $|I|$ denotes the size of instance $I$.

Proof. First, let us recall that, given any regular feasible solution $(t_{\theta,\boldsymbol a}, \xi_{i,\theta,a}, y_{i,\theta,a,\omega}, \gamma_{i,\theta,\theta',a})$ to LP (2), it is sufficient to set $p_{i,\theta,a,\omega} = y_{i,\theta,a,\omega} / \xi_{i,\theta,a}$ for every $i \in N$, $\theta \in \tilde\Theta^n$, $a \in A_{i,\theta_i}$, and $\omega \in \Omega$ in order to recover a feasible solution to Problem (1) having the same value. Thus, if the given optimal solution to LP (2) is regular, then the result immediately follows from Lemma 7, since in that case $\mathrm{LP} = \mathrm{SUP}$. Instead, if the given optimal solution is irregular, by applying Lemma 8 we can recover a regular solution to LP (2) with value at least $\mathrm{LP} - \epsilon(n \, \tau(|I|) + 1)$ in time polynomial in $|I|$ and $\frac{1}{\epsilon}$, and from that we can easily obtain a feasible solution to Problem (1) with value at least $\mathrm{SUP} - \epsilon(n \, \tau(|I|) + 1)$ (using the bound in Lemma 7), proving the result.

Lemma 9. For every instance of the Bayesian principal-multi-agent problem, LP (2) and LP (3) have the same optimal value. Moreover, given a feasible solution to LP (3), it is always possible to recover in polynomial time a feasible solution to LP (2) having at least the same value.

Proof. Since LP (3) is a relaxation of LP (2), in order to prove the statement it is sufficient to show that, given a feasible solution to LP (3), it is possible to build a feasible solution to LP (2) having at least the same value, in time polynomial in the size of the instance. Let $(t_{\theta,\boldsymbol a}, \xi_{i,\theta,a}, y_{i,\theta,a,\omega}, \gamma_{i,\theta,\theta',a})$ be a feasible solution to LP (3).

As a first step, we show that, for every tuple of agents' types $\theta \in \tilde\Theta^n$, it is possible to compute new values $\hat t_{\theta,\boldsymbol a}$ for $\boldsymbol a \in A_{n,\theta}$ such that: (i) $\sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} \hat t_{\theta,\boldsymbol a} = \xi_{i,\theta,a} - \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a}$ for every $i \in N$ and $a \in A_{i,\theta_i}$; (ii) $\hat t_{\theta,\boldsymbol a} \ge 0$ for all $\boldsymbol a \in A_{n,\theta}$; and (iii) the new values can be computed in polynomial time. For ease of presentation, for every tuple of agents' types $\theta \in \tilde\Theta^n$, agent $i \in N$, and action $a \in A_{i,\theta_i}$, let $\delta_{i,\theta,a} := \xi_{i,\theta,a} - \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a}$. Moreover, let $\delta_\theta := 1 - \sum_{\boldsymbol a \in A_{n,\theta}} t_{\theta,\boldsymbol a}$. Then, for every agent $i \in N$, it holds:
\[
\sum_{a \in A_{i,\theta_i}} \delta_{i,\theta,a} = \sum_{a \in A_{i,\theta_i}} \xi_{i,\theta,a} - \sum_{a \in A_{i,\theta_i}} \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a} = 1 - \sum_{\boldsymbol a \in A_{n,\theta}} t_{\theta,\boldsymbol a} = \delta_\theta.
\]
Now, let $\bar t_{\theta,\boldsymbol a}$ for $\boldsymbol a \in A_{n,\theta}$ be variable values identifying a probability distribution over action profiles in $A_{n,\theta}$ having marginal probabilities equal to $\delta_{i,\theta,a} / \delta_\theta$, i.e., such that, for every $i \in N$ and $a \in A_{i,\theta_i}$, it holds $\sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} \bar t_{\theta,\boldsymbol a} = \delta_{i,\theta,a} / \delta_\theta$. Notice that, since $\sum_{a \in A_{i,\theta_i}} \delta_{i,\theta,a} = \delta_\theta$ for every $i \in N$, the marginal probabilities are well defined and the (joint) probability distribution exists. Moreover, such values $\bar t_{\theta,\boldsymbol a}$ can be computed in polynomial time, since there always exists a probability distribution as desired having a polynomially-sized support (one greedy construction is sketched below). Then, let us define $\hat t_{\theta,\boldsymbol a} = \delta_\theta \, \bar t_{\theta,\boldsymbol a}$ for every $\boldsymbol a \in A_{n,\theta}$.
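The existence of a polynomial-support joint distribution with prescribed marginals admits a simple greedy construction, sketched below (our illustration, not the paper's code): each iteration assigns to some action profile the minimum remaining marginal mass among its coordinates, zeroing out at least one (agent, action) pair, so the loop runs at most $\sum_i |A_{i,\theta_i}|$ times.

```python
def joint_with_marginals(marginals, tol=1e-12):
    """Greedy construction of a joint distribution with given marginals.

    marginals: list (one entry per agent) of dicts action -> probability,
    each summing to one. Returns a dict mapping action profiles (tuples)
    to probabilities; the support size is at most the total number of
    (agent, action) pairs, hence polynomial.
    """
    rem = [dict(m) for m in marginals]
    joint = {}
    total = 1.0
    while total > tol:
        # Pick, for every agent, an action that still has remaining mass.
        profile = tuple(max(m, key=m.get) for m in rem)
        mass = min(rem[i][a] for i, a in enumerate(profile))
        joint[profile] = joint.get(profile, 0.0) + mass
        for i, a in enumerate(profile):
            rem[i][a] -= mass
            if rem[i][a] <= tol:
                del rem[i][a]  # at least one pair is zeroed per iteration
        total -= mass
    return joint
```

For instance, joint_with_marginals([{0: 0.5, 1: 0.5}, {0: 0.3, 1: 0.7}]) returns a distribution over profiles with exactly three support points, even though there are four possible profiles.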
Notice that such values satisfy all of conditions (i)–(iii), since
\[
\sum_{\boldsymbol a : a_i = a} \hat t_{\theta,\boldsymbol a} = \delta_\theta \sum_{\boldsymbol a : a_i = a} \bar t_{\theta,\boldsymbol a} = \delta_{i,\theta,a} = \xi_{i,\theta,a} - \sum_{\boldsymbol a : a_i = a} t_{\theta,\boldsymbol a},
\]
and $\hat t_{\theta,\boldsymbol a} \ge 0$ for all $\boldsymbol a \in A_{n,\theta}$, as the values $\bar t_{\theta,\boldsymbol a}$ identify a probability distribution.

Now, let us consider a new solution to LP (3), namely $(t'_{\theta,\boldsymbol a}, \xi_{i,\theta,a}, y_{i,\theta,a,\omega}, \gamma_{i,\theta,\theta',a})$, which is such that, for every $\theta \in \tilde\Theta^n$ and $\boldsymbol a \in A_{n,\theta}$, it holds $t'_{\theta,\boldsymbol a} = t_{\theta,\boldsymbol a} + \hat t_{\theta,\boldsymbol a}$. In the rest of the proof, we show that the solution defined above is feasible for LP (2) and that it has at least the same objective function value as the original feasible solution to LP (3).

First, it is easy to check that the objective value does not decrease, since each $t'_{\theta,\boldsymbol a}$ increases its value with respect to $t_{\theta,\boldsymbol a}$ and such variables appear with non-negative coefficients in the objective function. Second, for every agent $i \in N$, tuple of agents' types $\theta \in \tilde\Theta^n$, and action $a \in A_{i,\theta_i}$, it holds:
\[
\sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t'_{\theta,\boldsymbol a} = \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} \big( t_{\theta,\boldsymbol a} + \hat t_{\theta,\boldsymbol a} \big) = \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a} + \xi_{i,\theta,a} - \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a} = \xi_{i,\theta,a},
\]
where the second-to-last equality comes from condition (i) on $\hat t_{\theta,\boldsymbol a}$. Thus, such a solution is also feasible for LP (2). Moreover, it is easy to see that it can be computed in polynomial time.

Theorem 8. Given access to an approximate separation oracle $O^\alpha(\cdot, \cdot, \cdot, \cdot)$ with $\alpha \in (0, 1]$, there exists an algorithm that, given any $\rho > 0$ and an instance of the Bayesian principal-multi-agent problem as input, returns a DSIC menu of randomized contracts with principal's expected utility at least $\alpha R^\Gamma - P^\Gamma - \rho$ for every menu of randomized contracts $\Gamma = \{\gamma^\theta\}_{\theta \in \Theta^n}$, where $R^\Gamma \in [0, 1]$, respectively $P^\Gamma \in \mathbb{R}_+$, denotes the expected reward, respectively the expected overall payment, of $\Gamma$. Moreover, such an algorithm runs in time polynomial in the instance size and $\frac{1}{\rho}$.

Proof. We start by outlining the general procedure underlying the approximation algorithm (see Algorithm 1). The algorithm implements a binary search scheme to find a value $\eta^\star \in [0, 1]$ such that a feasibility version of LP (4) with the objective constrained to be at most $\eta^\star$ is "approximately" feasible, while the same problem with the objective constrained to be at most $\eta^\star - \beta$ is infeasible. The constant $\beta \ge 0$ will be specified later in the proof.

Algorithm 1 requires $O(\log \frac{1}{\beta})$ steps and, at each step, it works by determining, for a given value $\eta \in [0, 1]$, whether there exists an "approximately" feasible solution to the following feasibility version of LP (4)—called $\mathcal{F}$ for ease of presentation—which is obtained by dropping the objective function from LP (4) and adding a constraint enforcing that the value of the objective is at most $\eta$:
\[
\mathcal{F} \quad \begin{cases} \displaystyle \sum_{i \in N} \sum_{\theta \in \Theta} \sum_{\theta_{-i} \in \tilde\Theta^n_{-i}} x_{i,\theta,\theta_{-i}} \le \eta \\ \text{Constraints (4b)--(4i).} \end{cases}
\]
The algorithm is initialized with $l = 0$ and $h = 1$. At each iteration of the binary search scheme, the feasibility problem $\mathcal{F}$ with objective $\le \eta = \frac{l+h}{2}$ is solved via an ad hoc implementation of the ellipsoid method (described in detail below). If $\mathcal{F}$ is found to be infeasible by the ellipsoid method, the algorithm sets $l \leftarrow \eta$. Otherwise, if $\mathcal{F}$ is found to be "approximately" feasible, the algorithm sets $h \leftarrow \eta$. Then, the procedure is repeated with the updated values of $l$ and $h$, and it terminates when it determines a value $\eta^\star = h$ such that $\mathcal{F}$ with objective $\le \eta^\star$ is "approximately" feasible and $\mathcal{F}$ with objective $\le \eta^\star - \beta$ is infeasible.^{21}

^{21} Notice that, in the case in which the ad hoc implementation of the ellipsoid method concludes that $\mathcal{F}$ is "approximately" feasible at every iteration or, similarly, when it always returns infeasible, we can perform a similar analysis by observing that there always exists a feasible solution with objective 1, while all the feasible solutions have objective at least 0.
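Before the formal pseudocode (Algorithm 1 below), here is a minimal executable sketch of this binary-search shell; check_approx_feasible is a placeholder we introduce for one run of the ad hoc ellipsoid method, assumed to return a feasibility flag together with the Constraints (4e) it found violated.

```python
def binary_search_eta(check_approx_feasible, beta):
    """Binary search for the threshold eta* used by Algorithm 1.

    check_approx_feasible(eta) models a run of the ad hoc ellipsoid
    method on F with objective <= eta: it returns (feasible, violated),
    where violated is the set of Constraints (4e) detected as violated.
    """
    lo, hi = 0.0, 1.0
    h_star = set()  # constraints from the last infeasible run (H*)
    while hi - lo > beta:
        eta = (lo + hi) / 2.0
        feasible, violated = check_approx_feasible(eta)
        if feasible:
            hi = eta          # objective <= eta is approximately feasible
        else:
            lo = eta          # objective <= eta is infeasible
            h_star = violated
    return hi, h_star         # eta* = hi and the dual constraint set H*
```

The loop halves the interval $[l, h]$ at every step, so it performs $O(\log \frac{1}{\beta})$ iterations, matching the bound stated above.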
In the following, we first describe in detail the ad hoc implementation of the ellipsoid method employed by Algorithm 1. Then, we provide a bound on the principal's expected utility in an optimal DSIC menu of randomized contracts in terms of the value $\eta^\star$ found by Algorithm 1, as well as an $\eta^\star$-dependent bound on the value of the solution returned by Algorithm 1. Finally, we put all the bounds together in order to prove the statement of the theorem.

Algorithm 1 Approximation algorithm introduced in the proof of Theorem 8
Input: Bayesian principal-multi-agent problem instance $I := (N, \Theta, \Omega, A)$; multiplicative approximation factor $\alpha \in (0, 1]$; additive approximation error $\rho > 0$; approximate separation oracle $O^\alpha(\cdot, \cdot, \cdot, \cdot)$ for Constraints (4e)
1: Initialization: $l \leftarrow 0$; $h \leftarrow 1$; $H \leftarrow \emptyset$; $H^\star \leftarrow \emptyset$; $\beta \leftarrow \frac{\rho}{4}$; $\epsilon \leftarrow \frac{\rho}{4 |\tilde\Theta^n|}$
2: while $h - l > \beta$ do
3:   $\eta \leftarrow \frac{h+l}{2}$
4:   Run the ad hoc ellipsoid method on $\mathcal{F}$ with objective $\le \eta$, using additive error $\epsilon$ as input in the calls to the approximate separation oracle $O^\alpha(\cdot, \cdot, \cdot, \cdot)$ for Constraints (4e)
5:   $H \leftarrow \{$Constraints (4e) found to be violated during the ellipsoid method$\}$
6:   if the ellipsoid method returned infeasible then
7:     $l \leftarrow \eta$; $H^\star \leftarrow H$
8:   else
9:     $h \leftarrow \eta$
10: return $\eta^\star \leftarrow h$; an optimal solution to LP (3) in which only the variables $t_{\theta,\boldsymbol a}$ corresponding to the dual constraints in $H^\star$ are specified

Implementation of the ellipsoid method. Given a point $(x_{i,\theta,\theta_{-i}}, y_{i,\theta,a}, z_{i,\theta,\theta',a,a'}, d_{i,\theta,\theta_{-i}})$ in the variable domain of LP (4) and a value $\eta \in [0, 1]$, our implementation of the ellipsoid method employs an ad hoc separation oracle to determine whether the point is "approximately" feasible for problem $\mathcal{F}$ with objective $\le \eta$, or whether there exists a constraint of $\mathcal{F}$ that is violated at such a point. In the latter case, the separation oracle returns the violated constraint.

First, the oracle checks whether one among Constraints (4b)–(4d) and Constraints (4f)–(4i) is violated, which can be done in polynomial time by checking them one by one, since such constraints are polynomially many. If a violated constraint is found, the oracle returns it.

If none of Constraints (4b)–(4d) and Constraints (4f)–(4i) is violated, the oracle has to check the exponentially-many Constraints (4e). In order to do so, for every tuple of agents' types $\theta \in \tilde\Theta^n$, the oracle runs the procedure $O^\alpha(\cdot, \cdot, \cdot, \cdot)$, feeding it with the following inputs: the instance $I$, the tuple $\theta$, weights $w \in \mathbb{R}^{n\ell}$ such that $w_{i,a} = \min\{y_{i,\theta,a}, 2\}$ for all $i \in N$ and $a \in A_{i,\theta_i}$, and an additive error $\epsilon$. If the call $O^\alpha(I, w, \theta, \epsilon)$ returns a tuple of agents' actions $\boldsymbol a \in A_{n,\theta}$ such that $\lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} w_{i,a_i} \le 0$, then it also holds that $\alpha \lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} y_{i,\theta,a_i} \le \epsilon$ for every $\boldsymbol a \in A_{n,\theta}$, since $w_{i,a_i} \le y_{i,\theta,a_i}$ for all $i \in N$ by definition. If this happens for every tuple of agents' types $\theta \in \tilde\Theta^n$, the separation oracle concludes that $\mathcal{F}$ is "approximately" feasible, meaning that Constraints (4e) are satisfied up to a reward-multiplying approximation factor $\alpha$ and an additive error $\epsilon$. Instead, if for some $\theta \in \tilde\Theta^n$ the call $O^\alpha(I, w, \theta, \epsilon)$ returns an action profile $\boldsymbol a \in A_{n,\theta}$ such that $\lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} w_{i,a_i} > 0$, then it must be the case that $w_{i,a_i} \le 1$ for all $i \in N$ (as rewards belong to $[0, 1]$). Hence, it must be $w_{i,a_i} = y_{i,\theta,a_i}$ for all $i \in N$, and, thus, $\lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} y_{i,\theta,a_i} = \lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} w_{i,a_i} > 0$. Then, the separation oracle concludes that the feasibility problem $\mathcal{F}$ is infeasible and outputs the constraint in Constraints (4e) related to $\theta \in \tilde\Theta^n$ and $\boldsymbol a \in A_{n,\theta}$.
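The separation logic just described can be summarized by the following sketch. Every name on the hypothetical instance object (poly_constraints, type_profiles, lam, R, and so on) is a placeholder we introduce for illustration, and O_alpha stands for the approximate oracle of the theorem; none of these is an API from the paper.

```python
def separation_oracle(instance, point, eta, O_alpha, eps):
    """Ad hoc separation oracle for F with objective <= eta (sketch).

    First checks the polynomially-many constraints one by one, then
    delegates the exponentially-many Constraints (4e) to O_alpha, one
    call per type profile theta, using clipped weights w = min{y, 2}.
    """
    for c in instance.poly_constraints(eta):  # (4b)-(4d), (4f)-(4i), objective
        if c.violated_by(point):
            return ("violated", c)
    for theta in instance.type_profiles():
        w = {(i, a): min(point.y[i, theta, a], 2.0)
             for i in instance.agents for a in instance.actions(i, theta)}
        prof = O_alpha(instance, w, theta, eps)  # approximate maximizer
        value = instance.lam[theta] * instance.R[theta, prof] \
            - sum(w[i, prof[i]] for i in instance.agents)
        if value > 0:
            # Here each w[i, prof[i]] < 1, hence w = y entrywise on prof:
            # the corresponding Constraint (4e) is genuinely violated.
            return ("violated", (theta, prof))
    return ("approx_feasible", None)
```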
Bounding the value of an optimal solution. Next, we prove that $\alpha R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - |\tilde\Theta^n| \epsilon \le \eta^\star$, where $R^{\mathrm{OPT}}$ and $P^{\mathrm{OPT}}$ denote the expected reward and the expected overall payment of an optimal DSIC menu of randomized contracts.

First, let us recall that the binary search scheme in Algorithm 1 terminates with a value $\eta^\star \in [0, 1]$ such that the ad hoc implementation of the ellipsoid method applied to $\mathcal{F}$ with objective $\le \eta^\star$ concludes that the problem is "approximately" feasible. This implies that there exists a point $(x_{i,\theta,\theta_{-i}}, y_{i,\theta,a}, z_{i,\theta,\theta',a,a'}, d_{i,\theta,\theta_{-i}})$ in the variable domain of LP (4) such that Constraints (4b)–(4d) and Constraints (4f)–(4i) are satisfied and, additionally, $\alpha \lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} y_{i,\theta,a_i} \le \epsilon$ for every tuple of agents' types $\theta \in \tilde\Theta^n$ and tuple of agents' actions $\boldsymbol a \in A_{n,\theta}$.

In the following, we define a modified version of LP (4) (see LP (6) below) and we show that $(x_{i,\theta,\theta_{-i}}, y_{i,\theta,a}, z_{i,\theta,\theta',a,a'}, d_{i,\theta,\theta_{-i}})$ is a feasible solution to such a problem having value at most $\eta^\star$.
\[
\begin{aligned}
\min \quad & \sum_{i \in N} \sum_{\theta \in \Theta} \sum_{\theta_{-i} \in \tilde\Theta^n_{-i}} x_{i,\theta,\theta_{-i}} && \text{(6a)} \\
\text{s.t.} \quad & \alpha \, \lambda_\theta \, R_{\theta,\boldsymbol a} - \sum_{i \in N} y_{i,\theta,a_i} \le \epsilon \quad \forall \theta \in \tilde\Theta^n, \forall \boldsymbol a \in A_{n,\theta} && \text{(6b)} \\
& \text{Constraints (4b)--(4d) and (4f)--(4i).}
\end{aligned}
\]
Since Constraints (4b)–(4d) and (4f)–(4i) are satisfied by the point above, we only need to show that Constraints (6b) are satisfied as well. By contradiction, suppose that the constraint in Constraints (6b) relative to some $\theta \in \tilde\Theta^n$ and $\boldsymbol a \in A_{n,\theta}$ is violated. Then, it must be the case that $\alpha \lambda_\theta R_{\theta,\boldsymbol a} - \sum_{i \in N} y_{i,\theta,a_i} - \epsilon > 0$, contradicting the fact that the ellipsoid method classified problem $\mathcal{F}$ with objective $\le \eta^\star$ as "approximately" feasible. This shows that the point is feasible for LP (6).

The dual formulation of LP (6) reads as follows:
\[
\begin{aligned}
\max \quad & \sum_{\theta \in \tilde\Theta^n} \sum_{\boldsymbol a \in A_{n,\theta}} t_{\theta,\boldsymbol a} \, (\alpha \lambda_\theta R_{\theta,\boldsymbol a} - \epsilon) - \sum_{i \in N} \sum_{\theta \in \tilde\Theta^n} \lambda_\theta \sum_{a \in A_{i,\theta_i}} \sum_{\omega \in \Omega} F_{i,\theta_i,a,\omega} \, y_{i,\theta,a,\omega} && \text{(7a)} \\
\text{s.t.} \quad & \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a} t_{\theta,\boldsymbol a} \le \xi_{i,\theta,a} \quad \forall i \in N, \forall \theta \in \tilde\Theta^n, \forall a \in A_{i,\theta_i} && \text{(7b)} \\
& \text{Constraints (2b)--(2d) and (2f)--(2i),}
\end{aligned}
\]
where, for ease of presentation, we used the same variable names as in LP (3).

By strong duality, the optimal value of LP (7) is at most $\eta^\star$. Then, since an optimal DSIC menu of randomized contracts identifies an optimal solution to LP (3), and such a solution is clearly feasible for LP (7), the optimal value of LP (7) is at least
\[
\alpha \, R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - |\tilde\Theta^n| \, \epsilon, \tag{8}
\]
where we used the fact that, in any feasible solution, it holds $\sum_{\boldsymbol a \in A_{n,\theta}} t_{\theta,\boldsymbol a} \le 1$ for every $\theta \in \tilde\Theta^n$. This proves that $\alpha R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - |\tilde\Theta^n| \epsilon \le \eta^\star$.

Bounding the value of the solution returned by Algorithm 1. Next, we show that Algorithm 1 outputs a solution with value at least $\eta^\star - \beta$.
Let $H^\star \subset \tilde\Theta^n \times A^n$ be the set of tuples of agents' types and tuples of actions corresponding to the Constraints (4e) identified as violated by the ad hoc ellipsoid method during the last iteration of the binary search scheme in which it returned infeasible. It is immediate to see that, during such an iteration, the ellipsoid method is applied to the feasibility problem $\mathcal{F}$ with objective $\le l$, where $l \ge \eta^\star - \beta$ by definition of $\eta^\star$ and given how the binary search scheme terminates.

LP (4) with only the Constraints (4e) corresponding to elements in $H^\star$ (and all the other Constraints (4b)–(4d) and (4f)–(4i)) is infeasible, and the ellipsoid method guarantees that the elements of $H^\star$ are polynomially many. Moreover, the dual of such an LP is LP (3) in which only the variables $t_{\theta,\boldsymbol a}$ corresponding to the elements of $H^\star$ are specified. Formally, it can be written as:
\[
\begin{aligned}
\max \quad & \sum_{(\theta,\boldsymbol a) \in H^\star} \lambda_\theta \, t_{\theta,\boldsymbol a} \, R_{\theta,\boldsymbol a} - \sum_{i \in N} \sum_{\theta \in \tilde\Theta^n} \lambda_\theta \sum_{a \in A_{i,\theta_i}} \sum_{\omega \in \Omega} F_{i,\theta_i,a,\omega} \, y_{i,\theta,a,\omega} && \text{(9a)} \\
\text{s.t.} \quad & \sum_{\boldsymbol a \in A_{n,\theta} : a_i = a \,\wedge\, (\theta,\boldsymbol a) \in H^\star} t_{\theta,\boldsymbol a} \le \xi_{i,\theta,a} \quad \forall i \in N, \forall \theta \in \tilde\Theta^n, \forall a \in A_{i,\theta_i} && \text{(9b)} \\
& \text{Constraints (2b)--(2d) and (2f)--(2i).}
\end{aligned}
\]
By strong duality, LP (9) has optimal value at least $\eta^\star - \beta$. Moreover, an optimal solution can be computed in polynomial time, since $H^\star$ contains polynomially-many elements and, thus, LP (9) has polynomially-many variables and constraints.

Putting everything together. We conclude the proof by providing the desired approximation guarantees for the optimal solution to LP (9) returned by Algorithm 1. Let APX be the value of an optimal solution to LP (9). Moreover, recall that $\beta := \frac{\rho}{4}$ and $\epsilon := \frac{\rho}{4 |\tilde\Theta^n|}$, so that $|\tilde\Theta^n| \epsilon + \beta = \frac{\rho}{2}$. Then,
\[
\mathrm{APX} \ge \eta^\star - \beta \ge \alpha R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - |\tilde\Theta^n| \epsilon - \beta \ge \alpha R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - \frac{\rho}{2}.
\]
Finally, given an optimal solution to LP (9), by applying Lemma 9 we can recover in polynomial time a feasible solution to LP (2) with the same objective function value (notice that any solution that is feasible for LP (9) is also feasible for LP (3)). Then, by applying Theorem 7 with $\epsilon = \frac{\rho}{2(n \tau(|I|) + 1)}$ to the just-computed solution, we can recover in polynomial time a feasible solution to Problem (1), which corresponds to a DSIC menu of randomized contracts with principal's expected utility at least $\mathrm{APX} - \frac{\rho}{2} \ge \alpha R^{\mathrm{OPT}} - P^{\mathrm{OPT}} - \rho$, concluding the proof.

Corollary 3. In Bayesian principal-multi-agent problem instances that (i) have succinct rewards specified by an IR-supermodular function and (ii) satisfy the FOSD condition, for any $\rho > 0$, the problem of computing an optimal menu of randomized contracts admits an algorithm returning a menu with principal's expected utility at least $\mathrm{OPT} - \rho$ in time polynomial in the instance size and $\frac{1}{\rho}$, where OPT is the value of the optimal principal's expected utility.

Proof. We show that the problem admits a polynomial-time approximate separation oracle $O^1(\cdot, \cdot, \cdot, \cdot)$. Then, the result directly follows from Theorem 8. A call $O^1(I, w, \theta, \epsilon)$ to the approximate oracle can simply be implemented by means of the polynomial-time algorithm for non-Bayesian problems (see Theorem 4). Indeed, it is sufficient to rescale the function $g$ (and hence the rewards) by a factor $\lambda_\theta$, while replacing each value $\hat P_{i,a}$ with the weight $w_{i,a}$. It is easy to see that the arguments proving Theorem 4 continue to hold.

Corollary 4.
In Bayesian principal-multi-agent problem instances with succinct rewards specified by a DR-submodular function, the problem of computing an optimal menu of randomized contracts admits a polynomial-time approximation algorithm which, for any $\epsilon > 0$ given as input, outputs with high probability a menu providing the principal with an expected utility at least $(1 - 1/e) R^\Gamma - P^\Gamma - \epsilon$ for each menu of randomized contracts $\Gamma = \{\gamma^\theta\}_{\theta \in \Theta^n}$, where $R^\Gamma \in [0, 1]$, respectively $P^\Gamma \in \mathbb{R}_+$, denotes the expected reward, respectively the expected overall payment, of $\Gamma$.

Proof. We show that the problem admits a polynomial-time approximate separation oracle $O^{1-1/e}(\cdot, \cdot, \cdot, \cdot)$. Then, the result readily follows from Theorem 8.

In particular, a call $O^{1-1/e}(I, w, \theta, \epsilon)$ to the oracle can be implemented by means of the polynomial-time approximation algorithm introduced for non-Bayesian instances (see Theorem 6). Indeed, we can rescale the reward function $g$ (and hence the rewards) by a factor $\lambda_\theta$, while replacing each value $\hat P_{i,a}$ with the weight $w_{i,a}$. It is easy to see that the guarantees of Theorem 6 continue to hold.

Finally, by Theorem 6, the approximation guarantees of the oracle hold with high probability. Indeed, it is sufficient to apply a union bound over the polynomially-many calls to the oracle in order to ensure that the approximation guarantees of all the calls hold simultaneously with high probability, proving the result.
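To make the final union-bound step explicit: if the algorithm performs $K$ oracle calls and each call fails to meet its approximation guarantee with probability at most $\delta$, then
\[
\Pr\left[\text{all } K \text{ calls succeed}\right] \ge 1 - \sum_{k=1}^{K} \Pr\left[\text{call } k \text{ fails}\right] \ge 1 - K\delta.
\]
Since $K$ is polynomial in the instance size, it suffices to take $\delta$ inverse-polynomially small (assuming, as is standard for high-probability guarantees of this kind, that the failure probability can be driven down at polynomial cost) to keep the overall success probability high.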